draft
ASCRIBING MENTAL QUALITIES TO MACHINES

Abstract: Ascribing mental qualities like beliefs, intentions and wants to a machine is sometimes correct if done conservatively and is sometimes necessary to express what is known about its state.  We propose some new definitional tools for this: definitions relative to an approximate theory and second order structural definitions.  This paper is to be published in Philosophical Perspectives in Artificial Intelligence, edited by Martin Ringle, to be published by Humanities Press, July 1979.


INTRODUCTION

To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person.  It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it.  It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of a machine in a particular situation may require ascribing mental qualities or qualities isomorphic to them [1].  Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans and later applied to humans.  Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is very incompletely known.

These views are motivated by work in artificial intelligence [2] (abbreviated AI).  They can be taken as asserting that many of the philosophical problems of mind take a concrete form when one takes seriously the idea of making machines behave intelligently.  In particular, AI raises for machines two issues that have heretofore been considered only in connection with people.

First, in designing intelligent programs and looking at them from the outside we need to determine the conditions under which specific mental and volitional terms are applicable.  We can exemplify these problems by asking when it might be legitimate to say about a machine, "It knows I want a reservation to Boston, and it can give it to me, but it won't".

Second, when we want a generally intelligent [3] computer program, we must build into it a general view of what the world is like with especial attention to facts about how the information required to solve problems is to be obtained and used.  Thus we must provide it with some kind of metaphysics (general world-view) and epistemology (theory of knowledge) however naive.

As much as possible, we will ascribe mental qualities separately from each other instead of bundling them in a concept of mind.  This is necessary, because present machines have rather varied little minds; the mental qualities that can legitimately be ascribed to them are few and differ from machine to machine.  We will not even try to meet objections like, "Unless it also does X, it is illegitimate to speak of its having mental qualities."

Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance.  However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs, although such beliefs will be needed by computer programs that reason about what knowledge they lack and where to get it.  Mental qualities peculiar to human-like motivational structures [4], such as love and hate, will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures.  Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model.  While we will be quite liberal in ascribing some mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any particular quality.

The successive sections of this paper will give philosophical and AI reasons for ascribing beliefs to machines; two new forms of definition that seem necessary for defining mental qualities and examples of their use; examples of systems to which mental qualities are ascribed; some first attempts at defining a variety of mental qualities; some comments on other views on mental qualities; notes; and references.

This paper is exploratory and its presentation is non-technical.  Any axioms that are presented are illustrative and not part of an axiomatic system proposed as a serious candidate for AI or philosophical use.  This is regrettable for two reasons.  First, AI use of these concepts requires formal axiomatization.  Second, the lack of formalism focusses attention on whether the paper correctly characterizes mental qualities rather than on the formal properties of the theories proposed.  I think we can attain a situation like that in the foundations of mathematics, wherein the controversies about whether to take an intuitionist or classical point of view have been mainly replaced by technical studies of intuitionist and classical theories and the relations between them.  In future work, I hope to treat these matters more formally along the lines of (McCarthy 1977a and 1977b).  This won't eliminate controversy about the true nature of mental qualities, but I believe that their eventual resolution requires more technical knowledge than is now available.


WHY ASCRIBE MENTAL QUALITIES?

Why should we want to ascribe beliefs to machines at all?  This is the converse question to that of reductionism.  Instead of asking how mental qualities can be reduced to physical ones, we ask how to ascribe mental qualities to physical systems.

Our general motivation for ascribing mental qualities is the same as for ascribing any other qualities - namely to express available information about the machine and its current state.  To have information, we must have a space of possibilities whether explicitly described or not.  The ascription must therefore serve to distinguish the present state of the machine from past or future states or from the state the machine would have in other conditions or from the state of other machines.  Therefore, the issue is whether ascription of mental qualities is helpful in making these discriminations in the case of machines.

To put the issue sharply, consider a computer program for which we possess complete listings.  The behavior of the program in any environment is determined from the structure of the program and can be found out by simulating the action of the program and the environment without having to deal with any concept of belief.  Nevertheless, there are several reasons for ascribing belief and other mental qualities:

1. Although we may know the program, its state at a given moment is usually not directly observable, and the facts we can obtain about its current state may be more readily expressed by ascribing certain beliefs and goals than in any other way.

2. Even if we can simulate its interaction with its environment using another more comprehensive program, the simulation may be a billion times too slow.  We also may not have the initial conditions of the environment or the environment's laws of motion in a suitable form, whereas it may be feasible to make a prediction of the effects of the beliefs we ascribe to the program without any computer at all.

3. Ascribing beliefs may allow deriving general statements about the program's behavior that could not be obtained from any finite number of simulations.

4. The belief and goal structures we ascribe to the program may be easier to understand than the details of the program as expressed in its listing.

5. The belief and goal structure is likely to be close to the structure the designer of the program had in mind, and it may be easier to debug the program in terms of this structure than directly from the listing.  In fact, it is often possible for someone to correct a fault by reasoning in general terms about the information in a program or machine, diagnosing what is wrong as a false belief, and looking at the details of the program or machine only sufficiently to determine how the false belief is represented and what mechanism caused it to arise.

6. The difference between this program and another actual or hypothetical program may best be expressed as a difference in belief structure.

All the above reasons for ascribing beliefs are epistemological; i.e. ascribing beliefs is needed to adapt to limitations on our ability to acquire knowledge, use it for prediction, and establish generalizations in terms of the elementary structure of the program.  Perhaps this is the general reason for ascribing higher levels of organization to systems.


Computers give rise to numerous examples of building a higher structure on the basis of a lower and conducting subsequent analyses using the higher structure.  The geometry of the electric fields in a transistor and its chemical composition give rise to its properties as an electric circuit element.  Transistors are combined in small circuits and powered in standard ways to make logical elements such as ANDs, ORs, NOTs and flip-flops.  Computers are designed with these logical elements to obey a desired order code; the designer usually needn't consider the properties of the transistors as circuit elements.  When writing a compiler from a higher level language, one works with the order code and doesn't have to know about the ANDs and ORs; the user of the higher order language needn't know the computer's order code.

In the above cases, users of the higher level can completely ignore the lower level, because the behavior of the higher level system is completely determined by the values of the higher level variables; e.g. in order to determine the outcome of a computer program, one needn't consider the flip-flops.  However, when we ascribe mental structure to humans or goals to society, we always get highly incomplete systems; the higher level behavior cannot be fully predicted from higher level observations and higher level "laws" even when the underlying lower level behavior is determinate.  Moreover, at a given state of science and technology, different kinds of information can be obtained from experiment and theory building at the different levels of organization.

In order to program a computer to obtain information and co-operation from people and other machines, we will have to make it ascribe knowledge, belief, and wants to other machines and people.  For example, a program that plans trips will have to ascribe knowledge to travel agents and to the airline reservation computers.  It must somehow treat the information in books, perhaps by ascribing to them a passive form of knowledge.  The more powerful the program in interpreting what it is told, the less it has to know about how the information it can receive is represented internally in the teller and the more its ascriptions of knowledge will look like human ascriptions of knowledge to other humans.


TWO METHODS OF DEFINITION AND THEIR APPLICATION TO MENTAL QUALITIES

In our opinion, a major source of problems in defining mental and intensional concepts is the weakness of the methods of definition that have been explicitly used.  We introduce two kinds of definition: definition relative to an approximate theory and second order structural definition, and apply them to defining mental qualities.

1. Definitions relative to an approximate theory.

It is commonplace that most scientific concepts are not defined by isolated sentences of natural languages but rather as parts of theories, and the acceptance of the theory is determined by its fit to a large collection of phenomena.  We propose a similar method for explicating mental and other common sense concepts, but a certain phenomenon plays a more important role than with scientific theories: the concept is meaningful only in the theory, and cannot be defined with more precision than the theory permits.

The notion of one theory approximating another needs to be formalized.  In the case of physics, one can think of various kinds of numerical or probabilistic approximation.  I think this kind of approximation is untypical and misleading and won't help explicate such concepts as intentional action as meaningful in approximate theories.  Instead it may go something like this:

Consider a detailed theory T that has a state variable s.  We may imagine that s changes with time.  The approximating theory T' has a state variable s'.  There is a predicate atp(s,T') whose truth means that T' is applicable when the world is in state s.  There is a relation corr(s,s') which asserts that s' corresponds to the state s.  We have

1)      ∀s.(atp(s,T') ⊃ ∃s'.corr(s,s')).

Certain functions f1(s), f2(s), etc. have corresponding functions f1'(s'), f2'(s'), etc.  We have relations like

2)      ∀s s'.(corr(s,s') ⊃ f1(s) = f1'(s')).

However, the approximate theory T' may have additional functions g1'(s'), etc. that do not correspond to any functions of s.  Even when it is possible to construct gs corresponding to the g's, their definitions will often seem arbitrary, because the common sense user of g1' will only have used it within the context of T'.

Concepts whose definition involves counterfactuals provide examples.

Suppose we want to ascribe intentions and free will and to distinguish a deliberate action from an occurrence.  We want to call an output a deliberate action if the output would have been different if the machine's intentions had been different.  This requires a criterion for the truth of the counterfactual conditional sentence "If its intentions had been different the output wouldn't have occurred", and we require what seems to be a novel treatment of counterfactuals.

We treat the "relevant aspect of reality" as a Cartesian product so that we can talk about changing one component and leaving the others unchanged.  This would be straightforward if the Cartesian product structure existed in the world; however, it usually exists only in certain approximate models of the world.  Consequently no single definite state of the world as a whole corresponds to changing one component.  The following paragraphs present these ideas in greater detail.

Suppose A is a theory in which some aspect of reality is characterized by the values of three quantities x, y and z.  Let f be a function of three arguments, and let u be a quantity satisfying u = f(x,y,z), where f(1,1,1) = 3 and f(2,1,1) = 5.  Consider a state of the model in which x = 1, y = 1 and z = 1.  Within the theory A, the counterfactual conditional sentence "u = 3, but if x were 2, then u would be 5" is true, because the counterfactual condition means changing x to 2 and leaving the other variables unchanged.
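
To make the role of the theory's Cartesian product structure concrete, here is a minimal Python sketch of the example just given.  The particular function f below is an editorial assumption chosen only so that f(1,1,1) = 3 and f(2,1,1) = 5, and representing a state of the theory A as a dictionary of its three components is likewise only an illustration.

    def f(x, y, z):
        # any function with f(1,1,1) = 3 and f(2,1,1) = 5 serves the example
        return 2 * x + y * z

    state = {"x": 1, "y": 1, "z": 1}   # the actual state of the theory A; u = 3 here

    def counterfactual_u(state, component, new_value):
        # Change one component and leave the others unchanged - an operation that is
        # well defined only because the theory has a Cartesian product structure.
        altered = dict(state)
        altered[component] = new_value
        return f(**altered)

    assert f(**state) == 3
    assert counterfactual_u(state, "x", 2) == 5   # "if x were 2, then u would be 5"

In the world itself there need be no unique way to change x while holding y and z fixed; the substitution above makes sense only relative to the theory.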

Now let's go beyond the model and suppose that x, y and z are quantities depending on the state of the world.  Even if u = f(x,y,z) is taken as a law of nature, the counterfactual need not be taken as true, because someone might argue that if x were 2, then y would be 3 so that u might not be 5.  If the theory A has a sufficiently preferred status we may take the meaning of the counterfactual in A to be its general meaning, but it may sometimes be better to consider the counterfactual as defined solely in the theory, i.e. as syncategorematic.

A common sense example may be helpful: Suppose a ski instructor says, "He wouldn't have fallen if he had bent his knees when he made that turn", and another instructor replies, "No, the reason he fell was that he didn't put his weight on his downhill ski".  Suppose further that on reviewing a film, they agree that the first instructor was correct and the second mistaken.  I contend that this agreement is based on their common acceptance of a theory of skiing, and that within the theory, the decision may well be rigorous even though no-one bothers to imagine an alternate world as much like the real world as possible but in which the student had put his weight on his downhill ski.

We suggest that this is often (I haven't yet looked for counter-examples) the common sense meaning of a counterfactual.  The counterfactual has a definite meaning in a theory, because the theory has a Cartesian product structure, and the theory is sufficiently preferred that the meaning of the counterfactual in the world is taken as its meaning in the theory.  This is especially likely to be true for concepts that have a natural definition in terms of counterfactuals, e.g. the concept of deliberate action with which we started this section.

In all cases that we know about, the theory is approximate and incomplete.  Provided certain propositions are true, a certain quantity is approximately a given function of certain other quantities.  The incompleteness lies in the fact that the theory doesn't predict states of the world but only certain functions of them.  Thus a useful concept like deliberate action may seem to vanish if examined too closely, e.g. when we try to define it in terms of states of the world and not just in terms of certain functions of these states.

Remarks:

1.1. The known cases in which a concept is defined relative to an approximate theory involve counterfactuals.  This may not always be the case.

1.2. It is important to study the nature of the approximations.

1.3. (McCarthy and Hayes 1969) treats the notion of "X can do Y" using a theory in which the world is regarded as a collection of interacting automata.  That paper failed to note that sentences using "can" cannot necessarily be translated into single assertions about the world.

1.4. The attempt by old fashioned introspective psychology to analyze the mind into an interacting will, intellect and other components cannot be excluded on the methodological grounds used by behaviorists and positivists to declare them meaningless and exclude them from science.  These concepts might have precise definitions within a suitable approximate theory [5].

1.5. The above treatment of counterfactuals in which they are defined in terms of the Cartesian product structure of an approximate theory may be better than the "closest possible world" treatments discussed in (Lewis 1973).  The truth-values are well defined within the approximate theories, and the theories can be justified by evidence involving phenomena not mentioned in isolated counterfactual assertions.

1.6. Definition relative to approximate theories may help separate questions, such as some of those concerning counterfactuals, into internal questions within the approximate theory and the external question of the justification of the theory as a whole.  The internal questions are likely to be technical and have definite answers on which people can agree even if they have philosophical or scientific disagreements about the external questions.

2. Second Order Structural Definition.

Structural definitions of qualities are given in terms of the state of the system being described while behavioral definitions are given in terms of its actual or potential behavior [6].

If the structure of the machine is known, one can give an ad hoc first order structural definition.  This is a predicate B(s,p) where s represents a state of the machine and p represents a sentence in a suitable language, and B(s,p) is the assertion that when the machine is in state s, it believes the sentence p.  (The considerations of this paper are neutral in deciding whether to regard the object of belief as a sentence or to use a modal operator or to admit propositions as abstract objects that can be believed.  The paper is written as though sentences are the objects of belief, but I have more recently come to favor propositions and discuss them in (McCarthy 1977a).)

A general first order structural definition of belief would be a predicate B(W,M,s,p) where W is the "world" in which the machine M whose beliefs are in question is situated.  I do not see how to give such a definition of belief, and I think it is impossible.  Therefore we turn to second order definitions [7].

A second order structural definition of belief is a second order predicate β(W,M,B).  β(W,M,B) asserts that the first order predicate B is a "good" notion of belief for the machine M in the world W.  Here "good" means that the beliefs that B ascribes to M agree with our ideas of what beliefs M would have, not that the beliefs themselves are true.  The axiomatizations of belief in the literature are partial second order definitions.
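
Read as types, the difference between the two kinds of definition is only that B is a predicate on states and sentences, while β takes a whole candidate predicate B as one of its arguments.  The Python sketch below shows just this shape; collapsing the world W and the machine M into a finite table of judgments, and approving B when it matches them, is a deliberately crude placeholder rather than a serious criterion (the criteria 2.1-2.7 later in this section are what a real β would express).

    from typing import Callable, Iterable, Tuple

    State = dict        # however machine states are modelled; an assumption of the sketch
    Sentence = str      # sentences of the chosen language L

    # First order: a candidate belief predicate for a particular machine.
    BeliefPredicate = Callable[[State, Sentence], bool]

    def beta(judgments: Iterable[Tuple[State, Sentence, bool]],
             B: BeliefPredicate) -> bool:
        # Second order: a predicate whose argument is itself a predicate.  Here
        # 'judgments' stands in for our ideas of what the machine would believe.
        return all(B(s, p) == believed for (s, p, believed) in judgments)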

In general, a second order definition gives criteria for criticizing an ascription of a quality to a system.  We suggest that both our common sense and scientific usage of not-directly-observable qualities corresponds more closely to second order structural definition than to any kind of behavioral definition.  Note that a second order definition cannot guarantee that there exist predicates B meeting the criterion β or that such a B is unique.  Some qualities are best defined jointly with related qualities, e.g. beliefs and goals may require joint treatment.

Second order definitions criticize whole belief structures rather than individual beliefs.  We can treat individual beliefs by saying that a system believes p in state s provided all "reasonably good" B's satisfy B(s,p).  Thus we are distinguishing the "intersection" of the reasonably good B's.

(An analogy with cryptography may be helpful.  We solve a cryptogram by making hypotheses about the structure of the cipher and about the translation of parts of the cipher text.  Our solution is complete when we have "guessed" a cipher system that produces the cryptogram from a plausible plaintext message.  Though we never prove that our solution is unique, two different solutions are almost never found except for very short cryptograms.  In the analogy, the second order definition β corresponds to the general idea of encipherment, and B is the particular system used.  While we will rarely be able to prove uniqueness, we don't expect to find two B's both satisfying β.)

It seems to me that there should be a metatheorem of mathematical logic asserting that not all second order definitions can be reduced to first order definitions and further theorems characterizing those second order definitions that admit such reductions.  Such technical results, if they can be found, may be helpful in philosophy and in the construction of formal scientific theories.  I would conjecture that many of the informal philosophical arguments that certain mental concepts cannot be reduced to physics will turn out to be sketches of arguments that these concepts require second (or higher) order definitions.

Here is an approximate second order definition of belief.  For each state s of the machine and each sentence p in a suitable language L, we assign truth to B(s,p) if and only if the machine is considered to believe p when it is in state s.  The language L is chosen for our convenience, and there is no assumption that the machine explicitly represents sentences of L in any way.  Thus we can talk about the beliefs of Chinese, dogs, corporations, thermostats, and computer operating systems without assuming that they use English or our favorite first order language.  L may or may not be the language we are using for making other assertions, e.g. we could, writing in English, systematically use French sentences as objects of belief.  However, the best choice for artificial intelligence work may be to make L a subset of our "outer" language restricted so as to avoid the paradoxical self-references of (Montague 1963).

We now subject B(s,p) to certain criteria; i.e. β(B,W) is considered true provided the following conditions are satisfied:

2.1. The set Bel(s) of beliefs, i.e. the set of p's for which B(s,p) is assigned true when M is in state s, contains sufficiently "obvious" consequences of some of its members.

2.2. Bel(s) changes in a reasonable way when the state changes in time.  We like new beliefs to be logical or "plausible" consequences of old ones or to come in as communications in some language on the input lines or to be observations, i.e. beliefs about the environment the information for which comes in on the input lines.  The set of beliefs should not change too rapidly as the state changes with time.

2.3. We prefer the set of beliefs to be as consistent as possible.  (Admittedly, consistency is not a quantitative concept in mathematical logic - a system is either consistent or not, but it would seem that we will sometimes have to ascribe inconsistent sets of beliefs to machines and people.  Our intuition says that we should be able to maintain areas of consistency in our beliefs and that it may be especially important to avoid inconsistencies in the machine's purely analytic beliefs.)

2.4. Our criteria for belief systems can be strengthened if we identify some of the machine's beliefs as expressing goals, i.e. if we have beliefs of the form "It would be good if ...".  Then we can ask that the machine's behavior be somewhat rational, i.e. it does what it believes will achieve its goals.  The more of its behavior we can account for in this way, the better we will like the function B(s,p).  We also would like to regard internal state changes as changes in belief in so far as this is reasonable.

2.5. If the machine communicates, i.e. emits sentences in some language that can be interpreted as assertions, questions and commands, we will want the assertions to be among its beliefs unless we are ascribing to it a goal or subgoal that involves lying.  We will be most satisfied with our belief ascription if we can account for its communications as furthering the goals we are ascribing.

2.6. Sometimes we shall want to ascribe introspective beliefs, e.g. a belief that it does not know how to fly to Boston or even that it doesn't know what it wants in a certain situation.

2.7. Finally, we will prefer a more economical ascription B to a less economical one.  The fewer beliefs we ascribe and the less they change with state consistent with accounting for the behavior and the internal state changes, the better we will like it.  In particular, if ∀s p.(B1(s,p) ⊃ B2(s,p)), but not conversely, and B1 accounts for all the state changes and outputs that B2 does, we will prefer B1 to B2.  This insures that we will prefer to assign no beliefs to stones that don't change and don't behave.  A belief predicate that applies to a family of machines is preferable to one that applies to a single machine.

The above criteria have been formulated somewhat vaguely.  This would be bad if there were widely different ascriptions of beliefs to a particular machine that all met our criteria or if the criteria allowed ascriptions that differed widely from our intuitions.  My present opinion is that more thought will make the criteria somewhat more precise at no cost in applicability, but that they should still remain rather vague, i.e. we shall want to ascribe belief in a family of cases.  However, even at the present level of vagueness, there probably won't be radically different equally "good" ascriptions of belief for systems of practical interest.  If there were, we would notice unresolvable ambiguities in our ascriptions of belief to our acquaintances.

While we may not want to pin down our general idea of belief to a single axiomatization, we will need to build precise axiomatizations of belief and other mental qualities into particular intelligent computer programs.


EXAMPLES OF SYSTEMS WITH MENTAL QUALITIES

Let us consider some examples of machines and programs to which we may ascribe belief and goal structures.

1. Thermostats.  Ascribing beliefs to simple thermostats is unnecessary for the study of thermostats, because their operation can be well understood without it.  However, their very simplicity makes it clearer what is involved in the ascription, and we maintain (partly as a provocation to those who regard attribution of beliefs to machines as mere intellectual sloppiness) that the ascription is legitimate [8].

First consider a simple thermostat that turns off the heat when the temperature is a degree above the temperature set on the thermostat, turns on the heat when the temperature is a degree below the desired temperature, and leaves the heat as is when the temperature is in the two degree range around the desired temperature.  The simplest belief predicate B(s,p) ascribes belief to only three sentences: "The room is too cold", "The room is too hot", and "The room is OK" - the beliefs being assigned to states of the thermostat in the obvious way.  We ascribe to it the goal, "The room should be OK".  When the thermostat believes the room is too cold or too hot, it sends a message saying so to the furnace.  A slightly more complex belief predicate could also be used in which the thermostat has a belief about what the temperature should be and another belief about what it is.  It is not clear which is better, but if we wished to consider possible errors in the thermometer, then we would ascribe beliefs about what the temperature is.  We do not ascribe to it any other beliefs; it has no opinion even about whether the heat is on or off or about the weather or about who won the battle of Waterloo.  Moreover, it has no introspective beliefs; i.e. it doesn't believe that it believes the room is too hot.
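
As an illustration only, the simplest belief predicate for such a thermostat can be written out in a few lines of Python; the numerical setpoint and the three-valued state encoding below are assumptions made for the sketch, not part of the paper's proposal.

    DESIRED = 70.0   # setpoint in degrees; an arbitrary choice for the example

    def thermostat_state(temperature):
        # The thermostat's state, abstracted to one of three conditions.
        if temperature < DESIRED - 1.0:
            return "cold"
        elif temperature > DESIRED + 1.0:
            return "hot"
        else:
            return "ok"

    def B(s, p):
        # The simplest belief predicate: in each state, belief in exactly one sentence.
        return {"cold": "The room is too cold",
                "hot": "The room is too hot",
                "ok": "The room is OK"}[s] == p

    GOAL = "The room should be OK"   # the single goal we ascribe

    def message_to_furnace(s):
        # Action accords with the ascribed goal.
        if s == "cold":
            return "turn on the heat"
        if s == "hot":
            return "turn off the heat"
        return None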

Let us compare the above B(s,p) with the criteria of the previous section.  The belief structure is consistent (because all the beliefs are independent of one another), the beliefs arise from observation, and they result in action in accordance with the ascribed goal.  There is no reasoning and only commands (which we have not included in our discussion) are communicated.  Clearly assigning beliefs is of modest intellectual benefit in this case.  However, if we consider the class of possible thermostats, then the ascribed belief structure has greater constancy than the mechanisms for actually measuring and representing the temperature.

The temperature control system in my house may be described as follows: Thermostats upstairs and downstairs tell the central system to turn on or shut off hot water flow to these areas.  A central water-temperature thermostat tells the furnace to turn on or off thus keeping the central hot water reservoir at the right temperature.  Recently it was too hot upstairs, and the question arose as to whether the upstairs thermostat mistakenly believed it was too cold upstairs or whether the furnace thermostat mistakenly believed the water was too cold.  It turned out that neither mistake was made; the downstairs controller tried to turn off the flow of water but couldn't, because the valve was stuck.  The plumber came once and found the trouble, and came again when a replacement valve was ordered.  Since the services of plumbers are increasingly expensive, and microcomputers are increasingly cheap, one is led to design a temperature control system that would know a lot more about the thermal state of the house and its own state of health.

In the first place, while the present system couldn't turn off the flow of hot water upstairs, there is no reason to ascribe to it the knowledge that it couldn't, and a fortiori it had no ability to communicate this fact or to take it into account in controlling the system.  A more advanced system would know whether the actions it attempted succeeded, and it would communicate failures and adapt to them.  (We adapted to the failure by turning off the whole system until the whole house cooled off and then letting the two parts warm up together.  The present system has the physical capability of doing this even if it hasn't the knowledge or the will.)

While the thermostat believes "The room is too cold", there is no need to say that it understands the concept of "too cold".  The internal structure of "The room is too cold" is a part of our language, not its.

Consider a thermostat whose wires to the furnace have been cut.  Shall we still say that it knows whether the room is too cold?  Since fixing the thermostat might well be aided by ascribing this knowledge, we would like to do so.  Our excuse is that we are entitled to distinguish - in our language - the concept of a broken temperature control system from the concept of a certain collection of parts, i.e. to make intensional characterizations of physical objects.



2. Self-reproducing intelligent configurations in a cellular automaton world.  A cellular automaton system assigns a finite automaton to each point of the plane with integer co-ordinates.  The state of each automaton at time t+1 depends on its state at time t and the states of its neighbors at time t.  An early use of cellular automata was by von Neumann (196?) who found a 27 state automaton whose cells could be initialized into a self-reproducing configuration that was also a universal computer.  The basic automaton in von Neumann's system had a "resting" state 0, and a point in state 0 whose four neighbors were also in that state would remain in state 0.  The initial configurations considered had all but a finite number of cells in state 0, and, of course, this property would persist although the number of non-zero cells might grow indefinitely with time.

The self-reproducing system used the states of a long strip of non-zero cells as a "tape" containing instructions to a "universal constructor" configuration that would construct a copy of the configuration to be reproduced but with each cell in a passive state that would persist as long as its neighbors were also in passive states.  After the construction phase, the tape would be copied to make the tape for the new machine, and then the new system would be set in motion by activating one of its cells.  The new system would then move away from its mother, and the process would start over.  The purpose of the design was to demonstrate that arbitrarily complex configurations could be self-reproducing - the complexity being assured by also requiring that they be universal computers.

Since von Neumann's time, simpler basic cells admitting self-reproducing universal computers have been discovered.  The simplest so far is the two state Life automaton of John Conway (Gosper 1976).  The state of a cell at time t+1 is determined by its state at time t and the states of its eight neighbors at time t.  Namely, a point whose state is 0 will change to state 1 if exactly three of its neighbors are in state 1.  A point whose state is 1 will remain in state 1 if two or three of its neighbors are in state 1.  In all other cases the state becomes or remains 0.
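
The rule just stated is small enough to write down completely.  Here is a straightforward Python sketch of one time step; representing a configuration by the set of cells currently in state 1 is merely a convenience of the sketch.

    def life_step(live_cells):
        # One step of Conway's Life; live_cells is a set of (x, y) pairs in state 1.
        neighbor_counts = {}
        for (x, y) in live_cells:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        cell = (x + dx, y + dy)
                        neighbor_counts[cell] = neighbor_counts.get(cell, 0) + 1
        # A dead cell with exactly three live neighbors becomes live; a live cell
        # with two or three live neighbors stays live; everything else is dead.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

For example, the five cells {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)} form a glider, one of the simple configurations mentioned below whose future is easy to predict.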

Although this was not Conway's reason for introducing them, Conway and Gosper have shown that self-reproducing universal computers could be built up as Life configurations.

Consider a number of such self-reproducing universal computers operating in the Life plane, and suppose that they have been programmed to study the properties of their world and to communicate among themselves about it and pursue various goals co-operatively and competitively.  Call these configurations Life robots.  In some respects their intellectual and scientific problems will be like ours, but in one major respect they live in a simpler world than ours seems to be.  Namely, the fundamental physics of their world is that of the Life automaton, and there is no obstacle to each robot knowing this physics, and being able to simulate the evolution of a Life configuration given the initial state.  Moreover, if the initial state of the robot world is finite it can have been recorded in each robot in the beginning or else recorded on a strip of cells that the robots can read.  (The infinite regress of having to describe the description is avoided by providing that the description is not separately described, but can be read both as a description of the world and as a description of itself.)

Since these robots know the initial state of their world and its laws of motion, they can simulate as much of its history as they want, assuming that each can grow into unoccupied space so as to have memory to store the states of the world being simulated.  This simulation is necessarily slower than real time, so they can never catch up with the present - let alone predict the future.  This is obvious if the simulation is carried out straightforwardly by updating a list of currently active cells in the simulated world according to the Life rule, but it also applies to any clever mathematical method that might predict millions of steps ahead so long as it is supposed to be applicable to all Life configurations.  (Some Life configurations, e.g. static ones or ones containing single gliders or cannon can have their distant futures predicted with little computing.)  Namely, if there were an algorithm for such prediction, a robot could be made that would predict its own future and then disobey the prediction.  The detailed proof would be analogous to the proof of unsolvability of the halting problem for Turing machines.

Now we come to the point of this long disquisition.  Suppose we wish to program a robot to be successful in the Life world in competition or co-operation with the others.  Without any idea of how to give a mathematical proof, I will claim that our robot will need programs that ascribe purposes and beliefs to its fellow robots and predict how they will react to its own actions by assuming that they will act in ways that they believe will achieve their goals.  Our robot might acquire these mental theories in several ways: First, we might design the universal machine so that they are present in the initial configuration of the world.  Second, we might program it to acquire these ideas by induction from its experience and even transmit them to others through an "educational system".  Third, it might derive the psychological laws from the fundamental physics of the world and its knowledge of the initial configuration.  Finally, it might discover how robots are built from Life cells by doing experimental "biology".

Knowing the Life physics without some information about the initial configuration is insufficient to derive the psychological laws, because robots can be constructed in the Life world in an infinity of ways.  This follows from the "folk theorem" that the Life automaton is universal in the sense that any cellular automaton can be constructed by taking sufficiently large squares of Life cells as the basic cell of the other automaton [9].

Men are in a more difficult intellectual position than Life robots.  We don't know the fundamental physics of our world, and we can't even be sure that its fundamental physics is describable in finite terms.  Even if we knew the physical laws, they seem to preclude precise knowledge of an initial state and precise calculation of its future both for quantum mechanical reasons and because the continuous functions needed to represent fields seem to involve an infinite amount of information.


This example suggests that much of human mental structure is not an accident of evolution or even of the physics of our world, but is required for successful problem solving behavior and must be designed into or evolved by any system that exhibits such behavior.



3. Computer time-sharing systems.  These complicated computer programs allocate computer time and other resources among users.  They allow each user of the computer to behave as though he had a computer of his own, but also allow them to share files of data and programs and to communicate with each other.  They are often used for many years with continual small changes, and the people making the changes and correcting errors are often different from the original authors of the system.  A person confronted with the task of correcting a malfunction or making a change in a time-sharing system often can conveniently use a mentalistic model of the system.

Thus suppose a user complains that the system will not run his program.  Perhaps the system believes that he doesn't want to run, perhaps it persistently believes that he has just run, perhaps it believes that his quota of computer resources is exhausted, or perhaps it believes that his program requires a resource that is unavailable.  Testing these hypotheses can often be done with surprisingly little understanding of the internal workings of the program.
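
As a sketch of what such hypothesis testing might look like when mechanized: the fields of the job record below are invented for the illustration and do not refer to any particular operating system.

    def ascribed_beliefs(job):
        # Map externally visible scheduler state to the beliefs we ascribe to the
        # system about one user's job; 'job' is a hypothetical record with one
        # observable field per hypothesis in the paragraph above.
        beliefs = []
        if job.get("runnable_flag") is False:
            beliefs.append("the user does not want this job to run")
        if job.get("recently_completed"):
            beliefs.append("the user has just run")
        if job.get("quota_remaining", 1) <= 0:
            beliefs.append("the user's quota of computer resources is exhausted")
        if job.get("waiting_on_resource"):
            beliefs.append("the job requires a resource that is unavailable")
        return beliefs

    print(ascribed_beliefs({"quota_remaining": 0}))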



4. Programs designed to reason.  Suppose we explicitly design a program to represent information by sentences in a certain language stored in the memory of the computer and decide what to do by making inferences, and doing what it concludes will advance its goals.  Naturally, we would hope that our previous second order definition of belief will "approve of" a B(s,p) that ascribed to the program belief in the sentences explicitly built in.  We would be somewhat embarrassed if someone were to show that our second order definition approved as well or better of an entirely different set of beliefs.

Such a program was first proposed in (McCarthy 1959), and here is how it might work:
␈↓ α∧␈↓␈↓ αTInformation␈αabout␈αthe␈αworld␈αis␈αstored␈αin␈αa␈αwide␈αvariety␈αof␈αdata␈αstructures.␈α For␈αexample,␈αa
␈↓ α∧␈↓visual␈α∂scene␈α∂received␈α∂by␈α∂a␈α∂TV␈α∂camera␈α∂may␈α∂be␈α∂represented␈α∂by␈α∂a␈α∂512x512x3␈α∂array␈α⊂of␈α∂numbers
␈↓ α∧␈↓representing␈αthe␈αintensities␈αof␈αthree␈α
colors␈αat␈αthe␈αpoints␈αof␈α
the␈αvisual␈αfield.␈α At␈αanother␈α
level,␈αthe
␈↓ α∧␈↓same␈αscene␈α
may␈αbe␈α
represented␈αby␈α
a␈αlist␈α
of␈αregions,␈αand␈α
at␈αa␈α
further␈αlevel␈α
there␈αmay␈α
be␈αa␈α
list␈αof
␈↓ α∧␈↓physical␈α
objects␈α
and␈αtheir␈α
parts␈α
together␈αwith␈α
other␈α
information␈αabout␈α
these␈α
objects␈αobtained␈α
from
␈↓ α∧␈↓non-visual␈αsources.␈α Moreover,␈αinformation␈αabout␈αhow␈α
to␈αsolve␈αvarious␈αkinds␈αof␈αproblems␈αmay␈α
be
␈↓ α∧␈↓represented by programs in some programming language.
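
As a purely illustrative sketch of those levels (the field names are invented and presume nothing about any particular vision program):

    # Illustrative data structures for the three levels of representation described above.
    import numpy as np

    intensities = np.zeros((512, 512, 3))      # intensities of three colors at each point

    regions = [                                # the same scene as a list of regions
        {"region": 1, "bounds": (10, 10, 40, 60)},
        {"region": 2, "bounds": (50, 5, 90, 30)},
    ]

    objects = [                                # physical objects and their parts, together
        {"name": "cup", "parts": ["handle", "bowl"],   # with information from non-visual sources
         "contents": "hot coffee"},
    ]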

␈↓ α∧␈↓␈↓ αTHowever,␈α⊂all␈α⊂the␈α⊂above␈α∂representations␈α⊂are␈α⊂subordinate␈α⊂to␈α∂a␈α⊂collection␈α⊂of␈α⊂sentences␈α⊂in␈α∂a
␈↓ α∧␈↓suitable␈α∂first␈α∂order␈α∂language␈α∞that␈α∂includes␈α∂set␈α∂theory.␈α∂ By␈α∞subordinate,␈α∂we␈α∂mean␈α∂that␈α∂there␈α∞are
␈↓ α∧␈↓sentences␈α
that␈α
tell␈α
what␈α
the␈α
data␈α
structures␈α
represent␈α
and␈α
what␈α
the␈α
programs␈α
do.␈α
 New␈αsentences
can arise by a variety of processes: inference from sentences already present, computation from the data structures representing observations, and interpretation of certain inputs as communications in one or more languages.
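
A minimal sketch of this arrangement, with an invented sentence syntax (nothing here is a fixed notation of the paper), might be:

    # Illustrative only: the first order sentences are primary; they say what the other
    # representations mean, and new sentences arrive from the three sources named above.
    sentences = [
        "represents(intensities, visual_field(camera1))",    # what a data structure represents
        "does(route_planner, finds(paths))",                  # what a program does
    ]

    def interpret(utterance):
        return "said(user1, '" + utterance + "')"             # an input read as a communication

    def extend(inferred, observed, utterances):
        sentences.extend(inferred)                            # inference from sentences present
        sentences.extend(observed)                            # computation from observation data
        sentences.extend(interpret(u) for u in utterances)    # interpretation of inputs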

␈↓ α∧␈↓␈↓ αTThe␈αconstruction␈αof␈α
such␈αa␈αprogram␈αis␈α
one␈αof␈αthe␈αmajor␈α
approaches␈αto␈αachieving␈αhigh␈α
level


␈↓ α∧␈↓artificial␈αintelligence,␈αand,␈αlike␈αevery␈αother␈αapproach,␈αit␈αfaces␈αnumerous␈αobstacles.␈α These␈αobstacles
␈↓ α∧␈↓can␈αbe␈αdivided␈αinto␈αtwo␈αclasses␈α-␈α␈↓↓epistemological␈↓␈αand␈α␈↓↓heuristic.␈↓␈αThe␈αepistemological␈αproblem␈αis␈αto
␈↓ α∧␈↓determine␈αwhat␈αinformation␈αabout␈αthe␈αworld␈αis␈αto␈αbe␈αrepresented␈αin␈αthe␈αsentences␈αand␈αother␈αdata
␈↓ α∧␈↓structures,␈α
and␈αthe␈α
heuristic␈αproblem␈α
is␈αto␈α
decide␈αhow␈α
the␈αinformation␈α
can␈αbe␈α
used␈α
effectively␈αto
␈↓ α∧␈↓solve␈α∞problems.␈α∞ Naturally,␈α∞the␈α∞problems␈α∞interact,␈α
but␈α∞the␈α∞epistemological␈α∞problem␈α∞is␈α∞more␈α
basic
␈↓ α∧␈↓and␈αalso␈αmore␈αrelevant␈αto␈αour␈αpresent␈αconcerns.␈α We␈αcould␈αregard␈αit␈αas␈αsolved␈αif␈αwe␈αknew␈αhow␈αto
␈↓ α∧␈↓express␈αthe␈αinformation␈αneeded␈α
for␈αintelligent␈αbehavior␈αso␈α
that␈αthe␈αsolution␈αto␈α
problems␈αlogically
␈↓ α∧␈↓followed from the data.  The heuristic problem of actually obtaining the solutions would remain.

␈↓ α∧␈↓␈↓ αTThe␈α
information␈αto␈α
be␈αrepresented␈α
can␈α
be␈αroughly␈α
divided␈αinto␈α
general␈α
information␈αabout
␈↓ α∧␈↓the␈α⊗world␈α↔and␈α⊗information␈α⊗about␈α↔particular␈α⊗situations.␈α⊗ The␈α↔formalism␈α⊗used␈α↔to␈α⊗represent
␈↓ α∧␈↓information␈α∪about␈α∪the␈α∪world␈α∪must␈α∪be␈α∪␈↓↓epistemologically␈↓␈α∪␈↓↓adequate,␈↓␈α∪i.e.␈α∪it␈α∪must␈α∪be␈α∪capable␈α∪of
␈↓ α∧␈↓representing␈αthe␈αinformation␈αthat␈αis␈αactually␈αavailable␈αto␈αthe␈αprogram␈αfrom␈αits␈α
sensory␈αapparatus
␈↓ α∧␈↓or␈αcan␈αbe␈αdeduced.␈α Thus␈αit␈αcouldn't␈αhandle␈α
available␈αinformation␈αabout␈αa␈αcup␈αof␈αhot␈αcoffee␈αif␈α
its
␈↓ α∧␈↓only␈αway␈αof␈αrepresenting␈αinformation␈αabout␈αfluids␈αwas␈αin␈αterms␈αof␈αthe␈αpositions␈αand␈α
velocities␈αof
␈↓ α∧␈↓the␈αmolecules.␈α Even␈αthe␈αhydrodynamicist's␈αEulerian␈αdistributions␈αof␈αdensity,␈αvelocity,␈αtemperature
␈↓ α∧␈↓and␈α∀pressure␈α∪would␈α∀be␈α∪useless␈α∀for␈α∪representing␈α∀the␈α∪information␈α∀actually␈α∪obtainable␈α∀from␈α∪a
␈↓ α∧␈↓television camera.  These considerations are further discussed in (McCarthy and Hayes 1969).

␈↓ α∧␈↓␈↓ αTHere are some of the kinds of general information that will have to be represented:

␈↓ α∧␈↓␈↓ αT1.␈α
Narrative.␈α
 Events␈αoccur␈α
in␈α
space␈α
and␈αtime.␈α
 Some␈α
events␈αare␈α
extended␈α
in␈α
time.␈α Partial
␈↓ α∧␈↓information␈α
must␈α
be␈α
expressed␈α
about␈α
what␈α∞events␈α
begin␈α
or␈α
end␈α
during,␈α
before␈α
and␈α∞after␈α
others.
Partial information about places and their spatial relations must be expressible.  Sometimes
␈↓ α∧␈↓dynamic␈α∂information␈α∞such␈α∂as␈α∂velocities␈α∞are␈α∂better␈α∂known␈α∞than␈α∂the␈α∂space-time␈α∞facts␈α∂in␈α∂terms␈α∞of
␈↓ α∧␈↓which they are defined.

␈↓ α∧␈↓␈↓ αT2.␈α∃Partial␈α∃information␈α∃about␈α∃causal␈α∃systems.␈α∃ Quantities␈α∃have␈α∃values␈α∃and␈α∃later␈α∃have
␈↓ α∧␈↓different values.  Causal laws relate these values.

␈↓ α∧␈↓␈↓ αT3.␈αSome␈αchanges␈αare␈αresults␈αof␈αactions␈αby␈αthe␈αprogram␈αand␈αother␈αactors.␈α Information␈αabout
␈↓ α∧␈↓the effects of actions can be used to determine what goals can be achieved in given circumstances.

␈↓ α∧␈↓␈↓ αT4.␈αObjects␈αand␈αsubstances␈αhave␈α
locations␈αin␈αspace.␈α It␈αmay␈α
be␈αthat␈αtemporal␈αand␈αcausal␈α
facts
␈↓ α∧␈↓are prior to spatial facts in the formalism.

␈↓ α∧␈↓␈↓ αT5. Some objects are actors with beliefs, purposes and intentions.

␈↓ α∧␈↓␈↓ αTOf␈α∞course,␈α∞the␈α
above␈α∞English␈α∞description␈α
is␈α∞no␈α∞substitute␈α
for␈α∞an␈α∞axiomatized␈α∞formalism␈α
-
␈↓ α∧␈↓not␈α⊂even␈α⊂for␈α⊃philosophy␈α⊂but␈α⊂␈↓↓a␈α⊃fortiori␈↓␈α⊂when␈α⊂computer␈α⊂programs␈α⊃must␈α⊂be␈α⊂written.␈α⊃ The␈α⊂main
␈↓ α∧␈↓difficulties␈α∞in␈α∂designing␈α∞such␈α∞a␈α∂formalism␈α∞involve␈α∞deciding␈α∂how␈α∞to␈α∞express␈α∂partial␈α∞information.
␈↓ α∧␈↓(McCarthy␈α∞and␈α∞Hayes␈α∞1969)␈α∞uses␈α∞a␈α∞notion␈α∂of␈α∞␈↓↓situation␈↓␈α∞wherein␈α∞the␈α∞situation␈α∞is␈α∞never␈α∂known␈α∞-
␈↓ α∧␈↓only␈αfacts␈αabout␈αsituations␈αare␈αknown.␈α Unfortunately,␈αthe␈αformalism␈αis␈αnot␈αsuitable␈αfor␈α
expressing
␈↓ α∧␈↓what␈αmight␈αbe␈αknown␈αwhen␈αevents␈αare␈αtaking␈αplace␈αin␈αparallel␈αwith␈αunknown␈αtemporal␈αrelations.
␈↓ α∧␈↓It␈α⊃also␈α⊃only␈α⊃treats␈α⊃the␈α⊃case␈α⊃in␈α⊃which␈α⊃the␈α⊂result␈α⊃of␈α⊃an␈α⊃action␈α⊃is␈α⊃a␈α⊃definite␈α⊃new␈α⊃situation␈α⊂and
therefore isn't suitable for describing continuous processes.
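
For readers who have not seen that formalism, its central construct is a function, often written result(a, s), giving the definite new situation produced by performing action a in situation s (standard notation from McCarthy and Hayes 1969, not something introduced here); a typical assertion might be written

\[ holds(at(robot,\ x),\ result(go(x),\ s)), \]

and it is precisely the assumption that result(a, s) is a single definite situation that makes parallel events and continuous processes awkward to express.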


␈↓ α∧␈↓α␈↓ ∧j"GLOSSARY" OF MENTAL QUALITIES

␈↓ α∧␈↓␈↓ αTIn␈α
this␈α
section␈α
we␈α∞give␈α
short␈α
"definitions"␈α
for␈α
machines␈α∞of␈α
a␈α
collection␈α
of␈α∞mental␈α
qualities.
␈↓ α∧␈↓We␈αinclude␈αa␈αnumber␈αof␈α
terms␈αwhich␈αgive␈αus␈αdifficulty␈α
with␈αan␈αindication␈αof␈αwhat␈αthe␈α
difficulties
␈↓ α∧␈↓seem to be.  We emphasize the place of these concepts in the design of intelligent robots.

␈↓ α∧␈↓1.␈α
␈↓αIntrospection␈αand␈α
self-knowledge␈↓.␈α We␈α
say␈αthat␈α
a␈αmachine␈α
introspects␈αwhen␈α
it␈αcomes␈α
to␈αhave
␈↓ α∧␈↓beliefs␈α
about␈α∞its␈α
own␈α∞mental␈α
state.␈α∞ A␈α
simple␈α∞form␈α
of␈α∞introspection␈α
takes␈α∞place␈α
when␈α∞a␈α
program
␈↓ α∧␈↓determines␈α
whether␈αit␈α
has␈α
certain␈αinformation␈α
and␈αif␈α
not␈α
asks␈αfor␈α
it.␈α Often␈α
an␈α
operating␈αsystem
␈↓ α∧␈↓will␈α
compute␈α
a␈α
check␈αsum␈α
of␈α
itself␈α
every␈α
few␈αminutes␈α
to␈α
verify␈α
that␈αit␈α
hasn't␈α
been␈α
changed␈α
by␈αa
␈↓ α∧␈↓software or hardware malfunction.
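
A minimal sketch of these two simple forms of introspection (the function names are invented; hashlib.sha256 is the only real library call used):

    import hashlib

    def introspect_and_ask(known, item, ask):
        # Determine whether the program has a piece of information and, if not, ask for it.
        if item not in known:
            known[item] = ask(item)
        return known[item]

    def self_checksum(own_image: bytes) -> str:
        # The operating-system example: a checksum of the program's own code,
        # recomputed periodically to verify that it hasn't been changed.
        return hashlib.sha256(own_image).hexdigest()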

␈↓ α∧␈↓␈↓ αTIn␈α⊂principle,␈α⊂introspection␈α⊂is␈α⊂easier␈α⊂for␈α⊂computer␈α⊂programs␈α⊂than␈α⊂for␈α⊂people,␈α⊂because␈α⊂the
␈↓ α∧␈↓entire␈α⊂memory␈α⊂in␈α⊂which␈α⊂programs␈α⊂and␈α⊂data␈α⊃are␈α⊂stored␈α⊂is␈α⊂available␈α⊂for␈α⊂inspection.␈α⊂ In␈α⊃fact,␈α⊂a
␈↓ α∧␈↓computer␈αprogram␈αcan␈αbe␈αmade␈αto␈αpredict␈αhow␈αit␈αwould␈αreact␈αto␈αparticular␈αinputs␈αprovided␈αit␈α
has
␈↓ α∧␈↓enough␈α
free␈α
storage␈α
to␈αperform␈α
the␈α
calculation.␈α
 This␈α
situation␈αsmells␈α
of␈α
paradox,␈α
and␈α
there␈αis␈α
one.
␈↓ α∧␈↓Namely,␈α∂if␈α∂a␈α∂program␈α∂could␈α∂predict␈α∂its␈α∂own␈α∂actions␈α∞in␈α∂less␈α∂time␈α∂than␈α∂it␈α∂takes␈α∂to␈α∂carry␈α∂out␈α∞the
␈↓ α∧␈↓action,␈αit␈αcould␈αrefuse␈αto␈αdo␈αwhat␈αit␈αhas␈αpredicted␈αfor␈αitself.␈α This␈αonly␈αshows␈αthat␈αself-simulation
␈↓ α∧␈↓is necessarily a slow process, and this is not surprising.

␈↓ α∧␈↓␈↓ αTHowever,␈α
present␈α
programs␈α
do␈α
little␈α
interesting␈α
introspection.␈α
 This␈α
is␈α
just␈α
a␈α
matter␈α∞of␈α
the
␈↓ α∧␈↓undeveloped␈αstate␈αof␈αartificial␈αintelligence;␈αprogrammers␈αdon't␈αyet␈αknow␈αhow␈αto␈αmake␈αa␈αcomputer
␈↓ α∧␈↓program look at itself in a useful way.

␈↓ α∧␈↓2.␈α
 ␈↓αConsciousness␈αand␈α
self-consciousness␈↓.␈α
 Suppose␈αwe␈α
wish␈α
to␈αdistinguish␈α
the␈α
self-awareness␈αof␈α
a
␈↓ α∧␈↓machine,␈α
animal␈α
or␈α
person␈α
from␈α
its␈α
awareness␈α
of␈α
other␈α
things.␈α
 We␈α
explicate␈α
awareness␈α
as␈αbelief␈α
in
␈↓ α∧␈↓certain␈αsentences,␈αso␈α
in␈αthis␈αcase␈α
we␈αare␈αwant␈αto␈α
distinguish␈αthose␈αsentences␈α
or␈αthose␈αterms␈α
in␈αthe
␈↓ α∧␈↓sentences␈α
that␈α
may␈α
be␈α
considered␈α
to␈α
be␈α
about␈α
the␈α
self.␈α
 We␈α
also␈α
don't␈α
expect␈αthat␈α
self-consciousness
␈↓ α∧␈↓will␈αbe␈αa␈αsingle␈αproperty␈αthat␈αsomething␈αeither␈αhas␈αor␈αhasn't␈αbut␈αrather␈αthere␈αwill␈αbe␈αmany␈αkinds
␈↓ α∧␈↓of self-awareness with humans posessing many of the kinds we can imagine.

␈↓ α∧␈↓␈↓ αTHere are some of the kinds of self-awareness:

␈↓ α∧␈↓␈↓ β$2.1.␈α∂Certain␈α⊂predicates␈α∂of␈α∂the␈α⊂situation␈α∂(propositional␈α∂fluents␈α⊂in␈α∂the␈α⊂terminology␈α∂of
␈↓ α∧␈↓(McCarthy␈α∂and␈α∞Hayes␈α∂1969))␈α∂are␈α∞directly␈α∂observable␈α∞in␈α∂almost␈α∂all␈α∞situations␈α∂while␈α∂others␈α∞often
␈↓ α∧␈↓must␈α∞be␈α∞inferred.␈α∞ The␈α∞almost␈α∞always␈α∂observable␈α∞fluents␈α∞may␈α∞reasonably␈α∞be␈α∞identified␈α∂with␈α∞the
␈↓ α∧␈↓senses.␈α
 Likewise␈α
the␈α
values␈α
of␈α
certain␈α
fluents␈α
are␈α
almost␈α
always␈α
under␈α
the␈α
control␈α
of␈α
the␈α
being␈α
and
␈↓ α∧␈↓can␈α⊂be␈α⊂called␈α∂motor␈α⊂parameters␈α⊂for␈α∂lack␈α⊂of␈α⊂a␈α∂common␈α⊂language␈α⊂term.␈α∂ We␈α⊂have␈α⊂in␈α⊂mind␈α∂the
␈↓ α∧␈↓positions␈α∩of␈α∩the␈α∩joints.␈α∩ Most␈α∩motor␈α⊃parameters␈α∩are␈α∩both␈α∩observable␈α∩and␈α∩controllable.␈α∩ I␈α⊃am
␈↓ α∧␈↓inclined␈αto␈α
regard␈αthe␈α
possession of
a␈αsubstantial␈αset␈α
of␈αsuch␈α
constantly␈αobservable␈α
or␈αcontrollable
␈↓ α∧␈↓fluents␈α
as␈αthe␈α
most␈αprimitive␈α
form␈α
of␈αself-consciousness,␈α
but␈αI␈α
have␈αno␈α
strong␈α
arguments␈αagainst
␈↓ α∧␈↓someone who wished to require more.

␈↓ α∧␈↓␈↓ β$2.2.␈αThe␈αsecond␈αlevel␈αof␈αself-consciousness␈αrequires␈αa␈αterm␈α␈↓↓I␈↓␈αin␈αthe␈αlanguage␈αdenoting
␈↓ α∧␈↓the␈αself.␈α ␈↓↓I␈↓␈αshould␈αbelong␈αto␈αthe␈αclass␈αof␈αpersistent␈αobjects␈αand␈αsome␈αof␈αthe␈αsame␈αpredicates␈αshould
␈↓ α∧␈↓be␈α∂applicable␈α∂to␈α∂it␈α∂as␈α∂are␈α∂applicable␈α∂to␈α∂other␈α∂objects.␈α∂ For␈α∂example,␈α∂like␈α∂other␈α∂objects␈α∂␈↓↓I␈↓␈α∂has␈α∂a
␈↓ α∧␈↓location␈αthat␈αcan␈αchange␈αin␈αtime.␈α ␈↓↓I␈↓␈αis␈αalso␈αvisible␈αand␈αimpenetrable␈αlike␈αother␈αobjects.␈α However,
␈↓ α∧␈↓we␈αdon't␈αwant␈αto␈αget␈αcarried␈αaway␈αin␈αregarding␈αa␈αphysical␈αbody␈αas␈αa␈αnecessary␈αcondition␈αfor␈αself-
␈↓ α∧␈↓consciousness.␈α Imagine␈α
a␈αdistributed␈α
computer␈αwhose␈α
sense␈αand␈αmotor␈α
organs␈αcould␈α
also␈αbe␈α
in␈αa
␈↓ α∧␈↓variety of places.  We don't want to exclude it from self-consciousness by definition.


2.3. The third level comes when I is regarded as an actor among others.  The
␈↓ α∧␈↓conditions␈αthat␈αpermit␈α␈↓↓I␈↓␈αto␈αdo␈αsomething␈αare␈αsimilar␈αto␈αthe␈αconditions␈αthat␈αpermit␈αother␈αactors␈αto
␈↓ α∧␈↓do similar things.

␈↓ α∧␈↓␈↓ β$2.4.␈α
The␈α
fourth␈α∞level␈α
requires␈α
the␈α
applicability␈α∞of␈α
predicates␈α
such␈α
as␈α∞␈↓↓believes,␈↓␈α
␈↓↓wants␈↓
␈↓ α∧␈↓and␈α␈↓↓can␈↓␈αto␈α
␈↓↓I.␈↓␈αBeliefs␈αabout␈α
past␈αsituations␈αand␈α
the␈αability␈αto␈α
hypothesize␈αfuture␈αsituations␈αare␈α
also
␈↓ α∧␈↓required for this level.
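
As an illustration of this fourth level (the particular predicates and terms are invented for the example, in the spirit of the glossary), such a machine might assert sentences like

\[ believes(I,\ \textit{``I was at the airport yesterday''}), \qquad wants(I,\ \textit{``I am in Boston tomorrow''}), \qquad can(I,\ buy(ticket)). \]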

␈↓ α∧␈↓3.␈α␈↓αLanguage␈αand␈αthought␈↓.␈α Here␈αis␈αa␈αhypothesis␈αarising␈αfrom␈αartificial␈αintelligence␈αconcerning␈αthe
␈↓ α∧␈↓relation␈αbetween␈α
language␈αand␈α
thought.␈α Imagine␈αa␈α
person␈αor␈α
machine␈αthat␈αrepresents␈α
information
␈↓ α∧␈↓internally␈α
in␈α∞a␈α
huge␈α
network.␈α∞ Each␈α
node␈α
of␈α∞the␈α
network␈α
has␈α∞references␈α
to␈α
other␈α∞nodes␈α
through
␈↓ α∧␈↓relations.␈α⊃ (If␈α⊃the␈α⊃system␈α⊃has␈α⊃a␈α⊃variable␈α⊃collection␈α⊃of␈α⊃relations,␈α⊃then␈α⊃the␈α⊃relations␈α⊃have␈α⊃to␈α⊃be
␈↓ α∧␈↓represented␈αby␈αnodes,␈αand␈αwe␈αget␈αa␈αsymmetrical␈αtheory␈αif␈αwe␈αsuppose␈αthat␈αeach␈αnode␈αis␈αconnected
␈↓ α∧␈↓to␈αa␈α
set␈αof␈αpairs␈α
of␈αother␈α
nodes).␈α We␈αcan␈α
imagine␈αthis␈α
structure␈αto␈αhave␈α
a␈αlong␈α
term␈αpart␈αand␈α
also
extremely temporary parts representing current thoughts.  Naturally, each being has its own
␈↓ α∧␈↓network␈α∞depending␈α
on␈α∞its␈α∞own␈α
experience.␈α∞A␈α∞thought␈α
is␈α∞then␈α∞a␈α
temporary␈α∞node␈α∞currently␈α
being
␈↓ α∧␈↓referenced␈α⊂by␈α⊂the␈α∂mechanism␈α⊂of␈α⊂consciousness.␈α∂ Its␈α⊂meaning␈α⊂is␈α∂determined␈α⊂by␈α⊂its␈α⊂references␈α∂to
␈↓ α∧␈↓other␈αnodes␈αwhich␈αin␈αturn␈αrefer␈αto␈αyet␈αother␈αnodes.␈α Now␈αconsider␈αthe␈αproblem␈αof␈αcommunicating
␈↓ α∧␈↓a thought to another being.
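
A minimal sketch of such a network (the field names are invented):

    # Each node refers to other nodes through relations; since relations may themselves be
    # nodes, a reference is a pair (relation_node, other_node), as suggested above.
    class Node:
        def __init__(self, label):
            self.label = label
            self.references = []        # list of (relation_node, other_node) pairs

    isa = Node("is-a")
    dog = Node("dog")
    animal = Node("animal")
    dog.references.append((isa, animal))

    current_thought = dog               # a temporarily referenced node; its meaning is fixed
                                        # by what can be reached from it through the network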

␈↓ α∧␈↓␈↓ αTIts␈α
full␈α
communication␈α∞would␈α
involve␈α
transmitting␈α
the␈α∞entire␈α
network␈α
that␈α
can␈α∞be␈α
reached
␈↓ α∧␈↓from␈α⊂the␈α⊂given␈α⊂node,␈α⊂and␈α⊂this␈α⊂would␈α⊂ordinarily␈α⊂constitute␈α⊂the␈α⊂entire␈α⊂experience␈α⊂of␈α⊂the␈α⊂being.
␈↓ α∧␈↓More␈αthan␈αthat,␈αit␈αwould␈αbe␈αnecessary␈αto␈αalso␈αcommunicate␈αthe␈αprograms␈αthat␈αthat␈αtake␈αaction␈αon
␈↓ α∧␈↓the␈αbasis␈αof␈αencountering␈αcertain␈αnodes.␈α Even␈αif␈αall␈αthis␈αcould␈αbe␈αtransmitted,␈αthe␈αrecipient␈αwould
␈↓ α∧␈↓still␈α∪have␈α∩to␈α∪find␈α∩equivalents␈α∪for␈α∩the␈α∪information␈α∩in␈α∪terms␈α∩of␈α∪its␈α∩own␈α∪network.␈α∩ Therefore,
␈↓ α∧␈↓thoughts have to be translated into a public language before they can be communicated.

␈↓ α∧␈↓␈↓ αTA␈αlanguage␈αis␈αalso␈αa␈αnetwork␈αof␈αassociations␈αand␈αprograms.␈α However,␈αcertain␈αof␈αthe␈αnodes
␈↓ α∧␈↓in␈α∂this␈α∂network␈α∞(more␈α∂accurately␈α∂a␈α∞␈↓↓family␈↓␈α∂of␈α∂networks,␈α∞since␈α∂no␈α∂two␈α∞people␈α∂speak␈α∂precisely␈α∞the
␈↓ α∧␈↓same␈α
language)␈α
are␈αassociated␈α
with␈α
words␈αor␈α
set␈α
phrases.␈α Sometimes␈α
the␈α
translation␈αfrom␈α
thoughts
␈↓ α∧␈↓to␈α
sentences␈α
is␈α
easy,␈α
because␈αlarge␈α
parts␈α
of␈α
the␈α
private␈αnetworks␈α
are␈α
taken␈α
from␈α
the␈αpublic␈α
network,
␈↓ α∧␈↓and␈α
there␈αis␈α
an␈α
advantage␈αin␈α
preserving␈αthe␈α
correspondence.␈α
 However,␈αthe␈α
translation␈α
is␈αalways
approximate (in a sense that still lacks a technical definition), and some areas of experience are
␈↓ α∧␈↓difficult␈α∪to␈α∀translate␈α∪at␈α∪all.␈α∀ Sometimes␈α∪this␈α∀is␈α∪for␈α∪intrinsic␈α∀reasons,␈α∪and␈α∀sometimes␈α∪because
␈↓ α∧␈↓particular␈αcultures␈αdon't␈αuse␈αlanguage␈αin␈αthis␈αarea.␈α (It␈αis␈αmy␈αimpression␈αthat␈αcultures␈αdiffer␈αin␈αthe
␈↓ α∧␈↓extent␈αto␈αwhich␈αinformation␈αabout␈αfacial␈α
appearance␈αthat␈αcan␈αbe␈αused␈αfor␈αrecognition␈α
is␈αverbally
␈↓ α∧␈↓transmitted).␈α According␈αto␈αthis␈αscheme,␈αthe␈α"deep␈αstructure"␈αof␈αa␈αpublicly␈αexpressible␈αthought␈αis␈αa
␈↓ α∧␈↓node␈αin␈αthe␈αpublic␈αnetwork.␈α It␈αis␈αtranslated␈αinto␈αthe␈αdeep␈αstructure␈αof␈αa␈αsentence␈αas␈αa␈αtree␈αwhose
␈↓ α∧␈↓terminal␈α
nodes␈α
are␈α
the␈α
nodes␈α
to␈α
which␈αwords␈α
or␈α
set␈α
phrases␈α
are␈α
attached.␈α
 This␈α
"deep␈αstructure"
␈↓ α∧␈↓then must be translated into a string in a spoken or written language.

␈↓ α∧␈↓␈↓ αTThe␈α
need␈αto␈α
use␈αlanguage␈α
to␈αexpress␈α
thought␈αalso␈α
applies␈αwhen␈α
we␈αhave␈α
to␈αascribe␈α
thoughts
␈↓ α∧␈↓to other beings, since we cannot put the entire network into a single sentence.

␈↓ α∧␈↓4.␈α⊂␈↓αIntentions␈↓.␈α⊃ We␈α⊂are␈α⊃tempted␈α⊂to␈α⊃say␈α⊂that␈α⊂a␈α⊃machine␈α⊂␈↓↓intends␈↓␈α⊃to␈α⊂perform␈α⊃an␈α⊂action␈α⊃when␈α⊂it
␈↓ α∧␈↓believes␈α∀it␈α∀will␈α∪and␈α∀also␈α∀believes␈α∪that␈α∀it␈α∀could␈α∪do␈α∀otherwise.␈α∀ However,␈α∪we␈α∀will␈α∀resist␈α∪this
␈↓ α∧␈↓temptation␈αand␈αpropose␈αthat␈αa␈αpredicate␈α␈↓↓intends(actor,action,state)␈↓␈αbe␈αsuitably␈αaxiomatized␈αwhere
one of the axioms says that the machine intends the action if it believes it will perform the action
␈↓ α∧␈↓and␈α∞could␈α∞do␈α∞otherwise.␈α∞ Armstrong␈α∞(1968)␈α
wants␈α∞to␈α∞require␈α∞an␈α∞element␈α∞of␈α∞servo-mechanism␈α
in


␈↓ α∧␈↓order␈αthat␈αa␈αbelief␈αthat␈αan␈αaction␈αwill␈αbe␈αperformed␈αbe␈αregarded␈αas␈αan␈αintention,␈αi.e.␈αthere␈αshould
␈↓ α∧␈↓be␈α⊂a␈α⊂commitment␈α⊂to␈α⊂do␈α⊂it␈α⊂one␈α⊂way␈α∂or␈α⊂another.␈α⊂ There␈α⊂may␈α⊂be␈α⊂good␈α⊂reasons␈α⊂to␈α⊂allow␈α∂several
␈↓ α∧␈↓versions of intention to co-exist in the same formalism.
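
Written out, the axiom mentioned above would be something like the following transcription (not a full axiomatization; will and could_do_otherwise are placeholder terms):

\[ believes(m,\ will(a),\ s) \wedge believes(m,\ could\_do\_otherwise(a),\ s) \ \supset\ intends(m,\ a,\ s). \]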

␈↓ α∧␈↓5.␈α∂␈↓αFree␈α∞will␈↓.␈α∂ When␈α∂we␈α∞program␈α∂a␈α∂computer␈α∞to␈α∂make␈α∂choices␈α∞intelligently␈α∂after␈α∂determining␈α∞its
␈↓ α∧␈↓options,␈α∞examining␈α∞their␈α∞consequences,␈α∂and␈α∞deciding␈α∞which␈α∞is␈α∂most␈α∞favorable␈α∞or␈α∞most␈α∂moral␈α∞or
␈↓ α∧␈↓whatever,␈α∪we␈α∩must␈α∪program␈α∩it␈α∪to␈α∩take␈α∪an␈α∩attitude␈α∪towards␈α∩its␈α∪freedom␈α∩of␈α∪choice␈α∩essentially
␈↓ α∧␈↓isomorphic␈αto␈α
that␈αwhich␈α
a␈αhuman␈α
must␈αtake␈α
to␈αhis␈αown.␈α
 A␈αprogram␈α
will␈αhave␈α
to␈αtake␈α
such␈αan
␈↓ α∧␈↓attitude towards another unless it knows the details of the other's construction and present state.

␈↓ α∧␈↓␈↓ αTWe␈α⊃can␈α⊃define␈α⊂whether␈α⊃a␈α⊃particular␈α⊃action␈α⊂was␈α⊃free␈α⊃or␈α⊃forced␈α⊂␈↓↓relative␈α⊃to␈α⊃a␈α⊃theory␈↓␈α⊂that
␈↓ α∧␈↓ascribes␈αbeliefs␈αand␈αwithin␈αwhich␈α
beings␈αdo␈αwhat␈αthey␈αbelieve␈α
will␈αadvance␈αtheir␈αgoals.␈α In␈αsuch␈α
a
␈↓ α∧␈↓theory,␈α∞action␈α∞is␈α∞precipitated␈α∞by␈α∞a␈α∞belief␈α∞of␈α∞the␈α∂form␈α∞␈↓↓I␈α∞should␈α∞do␈α∞X␈α∞now␈↓.␈α∞ We␈α∞will␈α∞say␈α∂that␈α∞the
␈↓ α∧␈↓action␈αwas␈αfree␈αif␈αchanging␈αthe␈αbelief␈αto␈α␈↓↓I␈αshouldn't␈αdo␈αX␈αnow␈↓␈αwould␈αhave␈αresulted␈αin␈αthe␈αaction
␈↓ α∧␈↓not␈α∞being␈α∞performed.␈α∞ This␈α∞requires␈α∞that␈α
the␈α∞theory␈α∞of␈α∞belief␈α∞have␈α∞sufficient␈α∞Cartesian␈α
product
␈↓ α∧␈↓structure␈αso␈αthat␈αchanging␈αa␈αsingle␈αbelief␈αis␈αdefined,␈αbut␈αit␈αdoesn't␈αrequire␈αdefining␈αwhat␈αthe␈αstate
␈↓ α∧␈↓of the world would be if a single belief were different.
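
Assuming the Cartesian product structure just mentioned, so that replacing the single belief is well defined, the definition can be sketched as

\[ free(X,\ s) \ \equiv\ performed(X,\ s) \wedge \neg\, performed(X,\ s[\textit{belief} := \textit{``I shouldn't do X now''}]), \]

where s[belief := ...] denotes the state with only that belief changed; the bracket notation is invented for this sketch.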

␈↓ α∧␈↓␈↓ αTIt␈α⊃may␈α⊃be␈α⊃possible␈α⊃to␈α⊃separate␈α⊃the␈α⊃notion␈α⊂of␈α⊃a␈α⊃␈↓↓free␈α⊃action␈↓␈α⊃into␈α⊃a␈α⊃technical␈α⊃part␈α⊃and␈α⊂a
␈↓ α∧␈↓controversial␈α⊃part.␈α⊃ The␈α∩technical␈α⊃part␈α⊃would␈α⊃define␈α∩freedom␈α⊃relative␈α⊃to␈α⊃an␈α∩approximate␈α⊃co-
␈↓ α∧␈↓ordinate␈α~system␈α≠giving␈α~the␈α~necessary␈α≠Cartesian␈α~product␈α~structure.␈α≠ Relative␈α~to␈α≠the␈α~co-
␈↓ α∧␈↓ordinatization,␈α∂the␈α∂freedom␈α∞of␈α∂a␈α∂particular␈α∂action␈α∞would␈α∂be␈α∂a␈α∞technical␈α∂issue,␈α∂but␈α∂people␈α∞could
␈↓ α∧␈↓argue about whether to accept the whole co-ordinate system.

␈↓ α∧␈↓␈↓ αTThis␈α
isn't␈α
the␈α
whole␈αfree␈α
will␈α
story,␈α
because␈α
moralists␈αare␈α
also␈α
concerned␈α
with␈αwhether␈α
praise
␈↓ α∧␈↓or␈αblame␈αmay␈αbe␈αattributed␈α
to␈αa␈αchoice.␈α The␈αfollowing␈α
considerations␈αwould␈αseem␈αto␈αapply␈αto␈α
any
␈↓ α∧␈↓attempt to define the morality of actions in a way that would apply to machines:

␈↓ α∧␈↓␈↓ β$5.1.␈αThere␈αis␈αunlikely␈αto␈αbe␈αa␈αsimple␈αbehavioral␈αdefinition.␈α Instead␈αthere␈αwould␈αbe␈αa
␈↓ α∧␈↓second order definition criticizing predicates that ascribe morality to actions.

␈↓ α∧␈↓␈↓ β$5.2.␈α∩The␈α∪theory␈α∩must␈α∪contain␈α∩at␈α∩least␈α∪one␈α∩axiom␈α∪of␈α∩morality␈α∩that␈α∪is␈α∩not␈α∪just␈α∩a
␈↓ α∧␈↓statement of physical fact.  Relative to this axiom, moral judgments of actions can be factual.

␈↓ α∧␈↓␈↓ β$5.3.␈αThe␈αtheory␈αof␈αmorality␈αwill␈αpresuppose␈αa␈αtheory␈αof␈αbelief␈αin␈αwhich␈αstatements␈αof
␈↓ α∧␈↓the␈αform␈α␈↓↓"It␈αbelieved␈αthe␈αaction␈αwould␈αharm␈αsomeone"␈↓␈αare␈αdefined.␈α The␈αtheory␈αmust␈αascribe␈αbeliefs
␈↓ α∧␈↓about others' welfare and perhaps about the being's own welfare.

␈↓ α∧␈↓␈↓ β$5.4.␈α⊂It␈α⊃might␈α⊂be␈α⊃necessary␈α⊂to␈α⊃consider␈α⊂the␈α⊂machine␈α⊃as␈α⊂imbedded␈α⊃in␈α⊂some␈α⊃kind␈α⊂of
␈↓ α∧␈↓society in order to ascribe morality to its actions.

␈↓ α∧␈↓␈↓ β$5.5.␈αNo␈αpresent␈αmachines␈αadmit␈αsuch␈αa␈αbelief␈αstructure,␈αand␈αno␈αsuch␈αstructure␈αmay␈αbe
␈↓ α∧␈↓required␈α∂to␈α∂make␈α∂a␈α∂machine␈α∂with␈α∂arbitrarily␈α∂high␈α∂intelligence␈α∂in␈α∂the␈α∂sense␈α∂of␈α∞problem-solving
␈↓ α∧␈↓ability.

␈↓ α∧␈↓␈↓ β$5.6.␈α∂It␈α∂seems␈α⊂unlikely␈α∂that␈α∂morally␈α∂judgable␈α⊂machines␈α∂or␈α∂machines␈α∂to␈α⊂which␈α∂rights
␈↓ α∧␈↓might legitimately be ascribed should be made if and when it becomes possible to do so.

␈↓ α∧␈↓6.␈α
␈↓αUnderstanding␈↓.␈α
It␈α
seems␈αto␈α
me␈α
that␈α
understanding␈αthe␈α
concept␈α
of␈α
understanding␈αis␈α
fundamental


␈↓ α∧␈↓and␈αdifficult.␈α The␈α
first␈αdifficulty␈αlies␈α
in␈αdetermining␈αwhat␈α
the␈αoperand␈αis.␈α
 What␈αis␈αthe␈α"theory␈α
of
␈↓ α∧␈↓relativity"␈α∂in␈α∂␈↓↓"Pat␈α∂understands␈α∂the␈α⊂theory␈α∂of␈α∂relativity"␈↓?␈α∂ What␈α∂does␈α∂"misunderstand"␈α⊂mean?␈α∂ It
␈↓ α∧␈↓seems␈α∩that␈α∩understanding␈α∩should␈α∩involve␈α∩knowing␈α⊃a␈α∩certain␈α∩collection␈α∩of␈α∩facts␈α∩including␈α⊃the
␈↓ α∧␈↓general␈α∩laws␈α∩that␈α∩permit␈α⊃deducing␈α∩the␈α∩answers␈α∩to␈α⊃questions.␈α∩ We␈α∩probably␈α∩want␈α∩to␈α⊃separate
␈↓ α∧␈↓understanding from issues of cleverness and creativity.

␈↓ α∧␈↓7.␈α∞␈↓αCreativity␈↓.␈α∂ This␈α∞may␈α∞be␈α∂easier␈α∞than␈α∞"understanding"␈α∂at␈α∞least␈α∞if␈α∂we␈α∞confine␈α∞our␈α∂attention␈α∞to
␈↓ α∧␈↓reasoning␈α∞processes.␈α
 Many␈α∞problem␈α
solutions␈α∞involve␈α∞the␈α
introduction␈α∞of␈α
entities␈α∞not␈α∞present␈α
in
␈↓ α∧␈↓the␈α⊃statement␈α⊂of␈α⊃the␈α⊂problem.␈α⊃ For␈α⊂example,␈α⊃proving␈α⊃that␈α⊂an␈α⊃8␈α⊂by␈α⊃8␈α⊂square␈α⊃board␈α⊃with␈α⊂two
␈↓ α∧␈↓diagonally␈αopposite␈αsquares␈αremoved␈αcannot␈α
be␈αcovered␈αby␈αdominoes␈αeach␈αcovering␈α
two␈αadjacent
squares involves introducing the colors of the squares and the fact that a domino covers two
␈↓ α∧␈↓squares␈α
of␈α∞opposite␈α
color.␈α
 We␈α∞want␈α
to␈α
regard␈α∞this␈α
as␈α
a␈α∞creative␈α
proof␈α
even␈α∞though␈α
it␈α∞might␈α
be
␈↓ α∧␈↓quite easy for an experienced combinatorist.
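
For completeness, the counting step of that proof: the removed corners are diagonally opposite and hence the same color, so the mutilated board has 30 squares of one color and 32 of the other, while

\[ 31 \text{ dominoes cover exactly } 31 \text{ squares of each color}, \quad\text{and}\quad 31 \neq 30, \]

so no covering exists.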


␈↓ α∧␈↓α␈↓ ¬(OTHER VIEWS ABOUT MIND

␈↓ α∧␈↓␈↓ αTThe␈α∞fundamental␈α∞difference␈α∞in␈α∞point␈α∞of␈α∞view␈α∞between␈α∞this␈α∞paper␈α∞and␈α∞most␈α∞philosophy␈α∞is
␈↓ α∧␈↓that␈α⊃we␈α⊃are␈α⊃motivated␈α⊃by␈α⊃the␈α⊃problem␈α⊃of␈α⊃designing␈α⊃an␈α⊃artificial␈α⊃intelligence.␈α⊃ Therefore,␈α⊃our
␈↓ α∧␈↓attitude␈α
towards␈α
a␈α
concept␈α
like␈α
␈↓↓belief␈↓␈α
is␈αdetermined␈α
by␈α
trying␈α
to␈α
decide␈α
what␈α
ways␈α
of␈αacquiring␈α
and
␈↓ α∧␈↓using␈α
beliefs␈αwill␈α
lead␈αto␈α
intelligent␈αbehavior.␈α
 Then␈αwe␈α
discover␈αthat␈α
much␈αthat␈α
one␈αintelligence
␈↓ α∧␈↓can find out about another can be expressed by ascribing beliefs to it.

␈↓ α∧␈↓␈↓ αTA␈α
negative␈α
view␈α
of␈α
empiricism␈α
seems␈α
dictated␈α
by
the␈α
apparent␈α
artificiality␈α
of␈α
designing␈α
an
␈↓ α∧␈↓empiricist␈α⊃computer␈α⊃program␈α∩to␈α⊃operate␈α⊃in␈α∩the␈α⊃real␈α⊃world.␈α⊃ Namely,␈α∩we␈α⊃plan␈α⊃to␈α∩provide␈α⊃our
␈↓ α∧␈↓program␈α
with␈α
certain␈α
senses,␈αbut␈α
we␈α
have␈α
no␈α
way␈αof␈α
being␈α
sure␈α
that␈αthe␈α
world␈α
in␈α
which␈α
we␈αare
␈↓ α∧␈↓putting␈αthe␈αmachine␈α
is␈αconstructable␈αfrom␈α
the␈αsense␈αimpressions␈α
it␈αwill␈αhave.␈α
 Whether␈αit␈αwill␈α
ever
␈↓ α∧␈↓know␈α
some␈α
fact␈α
about␈α
the␈α
world␈αis␈α
contingent,␈α
so␈α
we␈α
are␈α
not␈αinclined␈α
to␈α
build␈α
into␈α
it␈α
the␈αnotion
␈↓ α∧␈↓that what it can't know about doesn't exist.

␈↓ α∧␈↓␈↓ αTThe␈αphilosophical␈α
views␈αmost␈αsympathetic␈α
to␈αour␈αapproach␈α
are␈αsome␈αexpressed␈α
by␈αCarnap
␈↓ α∧␈↓in some of the discursive sections of (Carnap 1956).

␈↓ α∧␈↓␈↓ αTHilary␈α∞Putnam␈α∞(1961)␈α∞argues␈α∞that␈α∞the␈α∞classical␈α∞mind-body␈α∞problems␈α∞are␈α∞just␈α∞as␈α∂acute␈α∞for
␈↓ α∧␈↓machines␈αas␈αfor␈αmen.␈α Some␈αof␈αhis␈αarguments␈αare␈αmore␈αexplicit␈αthan␈αany␈αgiven␈αhere,␈αbut␈α
in␈αthat
␈↓ α∧␈↓paper, he doesn't try to solve the problems for machines.

␈↓ α∧␈↓␈↓ αTD.M.␈α⊂Armstrong␈α∂(1968)␈α⊂␈↓↓"attempts␈α∂to␈α⊂show␈α∂that␈α⊂there␈α∂are␈α⊂no␈α∂valid␈α⊂philosophical␈α⊂or␈α∂logical
␈↓ α∧␈↓↓reasons␈α
for␈α
rejecting␈α
the␈α
identification␈α
of␈α
mind␈α
and␈α
brain."␈↓␈α
He␈α
does␈α
this␈α
by␈α
proposing␈αdefinitions␈α
of
␈↓ α∧␈↓mental␈α
concepts␈α∞in␈α
terms␈α∞of␈α
the␈α∞state␈α
of␈α
the␈α∞brain.␈α
 Fundamentally,␈α∞I␈α
agree␈α∞with␈α
him␈α∞and␈α
think
␈↓ α∧␈↓that␈α∞such␈α∞a␈α∞program␈α∞of␈α∂definition␈α∞can␈α∞be␈α∞carried␈α∞out,␈α∞but␈α∂it␈α∞seems␈α∞to␈α∞me␈α∞that␈α∞his␈α∂methods␈α∞for
␈↓ α∧␈↓defining␈α∞mental␈α
qualities␈α∞as␈α∞brain␈α
states␈α∞are␈α∞too␈α
weak␈α∞even␈α∞for␈α
defining␈α∞properties␈α∞of␈α
computer
␈↓ α∧␈↓programs.  While he goes beyond behavioral definitions as such, he relies on dispositional states.

␈↓ α∧␈↓␈↓ αTThis␈αpaper␈αis␈αpartly␈αan␈αattempt␈αto␈αdo␈αwhat␈αRyle␈α(1949)␈αsays␈αcan't␈αbe␈αdone␈αand␈αshouldn't␈αbe
␈↓ α∧␈↓attempted␈α∞-␈α∞namely␈α∞to␈α∞define␈α∂mental␈α∞qualities␈α∞in␈α∞terms␈α∞of␈α∂states␈α∞of␈α∞a␈α∞machine.␈α∞ The␈α∂attempt␈α∞is
␈↓ α∧␈↓based␈α
on␈α
methods␈α
of␈α
which␈α
he␈α∞would␈α
not␈α
approve;␈α
he␈α
implicitly␈α
requires␈α
first␈α∞order␈α
definitions,
␈↓ α∧␈↓and␈αhe␈αimplicitly␈αrequires␈αthat␈αdefinitions␈αbe␈αmade␈αin␈αterms␈αof␈αthe␈αstate␈αof␈αthe␈αworld␈αand␈α
not␈αin
␈↓ α∧␈↓terms of approximate theories.

␈↓ α∧␈↓␈↓ αTHis␈α∃final␈α∃view␈α∃of␈α∀the␈α∃proper␈α∃subject␈α∃matter␈α∃of␈α∀epistemology␈α∃is␈α∃too␈α∃narrow␈α∃to␈α∀help
␈↓ α∧␈↓researchers␈α∂in␈α∂artificial␈α∂intelligence.␈α∞ Namely,␈α∂we␈α∂need␈α∂help␈α∞in␈α∂expressing␈α∂those␈α∂facts␈α∂about␈α∞the
␈↓ α∧␈↓world␈αthat␈αcan␈αbe␈αobtained␈αin␈αan␈αordinary␈αsituation␈αby␈αan␈αordinary␈αperson␈αand␈αthe␈αgeneral␈αfacts
about the world that will enable our program to decide to call a travel agent to find out how to get to
␈↓ α∧␈↓Boston.

␈↓ α∧␈↓␈↓ αTDonald␈α∃Davidson␈α∀(1973)␈α∃undertakes␈α∀to␈α∃show,␈α∀␈↓↓"There␈α∃is␈α∀no␈α∃important␈α∀sense␈α∃in␈α∀which
␈↓ α∧␈↓↓psychology␈α⊃can␈α∩be␈α⊃reduced␈α⊃to␈α∩the␈α⊃physical␈α⊃sciences"␈↓.␈α∩ He␈α⊃proceeds␈α⊃by␈α∩arguing␈α⊃that␈α∩the␈α⊃mental
␈↓ α∧␈↓qualities␈α∂of␈α∂a␈α∂hypothetical␈α∂artificial␈α∂man␈α∂could␈α∂not␈α∂be␈α∂defined␈α∂physically␈α∂even␈α∂if␈α∂we␈α∂knew␈α∂the
␈↓ α∧␈↓details of its physical structure.

␈↓ α∧␈↓␈↓ αTOne␈α∂sense␈α∂of␈α∂Davidson's␈α⊂statement␈α∂does␈α∂not␈α∂require␈α⊂the␈α∂arguments␈α∂he␈α∂gives.␈α⊂ There␈α∂are
␈↓ α∧␈↓many␈α∂universal␈α∞computing␈α∂elements␈α∂-␈α∞relays,␈α∂neurons,␈α∞gates␈α∂and␈α∂flip-flops,␈α∞and␈α∂physics␈α∂tells␈α∞us
␈↓ α∧␈↓many␈α
ways␈α
of␈α
constructing␈α
them.␈α
 Any␈α∞information␈α
processing␈α
system␈α
that␈α
can␈α
be␈α∞constructed␈α
of


␈↓ α∧␈↓one␈α
kind␈α∞of␈α
element␈α∞can␈α
be␈α∞constructed␈α
of␈α∞any␈α
other.␈α∞ Therefore,␈α
physics␈α∞tells␈α
us␈α∞nothing␈α
about
␈↓ α∧␈↓what␈α
information␈α∞processes␈α
exist␈α∞in␈α
nature␈α∞or␈α
can␈α
be␈α∞constructed.␈α
 Computer␈α∞science␈α
is␈α∞no␈α
more
␈↓ α∧␈↓reducible to physics than is psychology.

␈↓ α∧␈↓␈↓ αTHowever,␈αDavidson␈αalso␈α
argues␈αthat␈αthe␈αmental␈α
states␈αof␈αan␈α
organism␈αare␈αnot␈αdescribable␈α
in
␈↓ α∧␈↓terms␈αof␈αits␈αphysical␈αstructure,␈αand␈αI␈αtake␈αthis␈αto␈αassert␈αalso␈αthat␈αthey␈αare␈αnot␈αdescribable␈αin␈αterms
␈↓ α∧␈↓of␈α⊂its␈α⊂construction␈α∂from␈α⊂logical␈α⊂elements.␈α⊂ I␈α∂would␈α⊂take␈α⊂his␈α∂arguments␈α⊂as␈α⊂showing␈α⊂that␈α∂mental
␈↓ α∧␈↓qualities␈αdon't␈αhave␈αwhat␈αI␈αhave␈αcalled␈αfirst␈αorder␈αstructural␈αdefinitions.␈α I␈αdon't␈αthink␈αthey␈αapply
␈↓ α∧␈↓to second order definitions.

␈↓ α∧␈↓␈↓ αTD.C.␈α∞Dennett␈α∞(1971)␈α∞expresses␈α∞views␈α∞very␈α∞similar␈α∞to␈α∞mine␈α∞about␈α∞the␈α∞reasons␈α∞for␈α∞ascribing
␈↓ α∧␈↓mental␈α∪qualities␈α∪to␈α∪machines.␈α∪ However,␈α∪the␈α∪present␈α∪paper␈α∪emphasizes␈α∪criteria␈α∪for␈α∪ascribing
␈↓ α∧␈↓particular␈αmental␈αqualities␈αto␈αparticular␈αmachines␈αrather␈αthan␈αthe␈αgeneral␈αproposition␈αthat␈αmental
␈↓ α∧␈↓qualities␈α
may␈α
be␈α
ascribed.␈α
 I␈α
think␈α
that␈α
the␈α
chess␈α
programs␈α
Dennett␈α
discusses␈α
have␈α
more␈αlimited
␈↓ α∧␈↓mental␈α⊂structures␈α⊂than␈α∂he␈α⊂seems␈α⊂to␈α⊂ascribe␈α∂to␈α⊂them.␈α⊂ Thus␈α∂their␈α⊂␈↓↓beliefs␈↓␈α⊂almost␈α⊂always␈α∂concern
␈↓ α∧␈↓particular␈αpositions,␈αand␈αthey␈α
␈↓↓believe␈↓␈αalmost␈αno␈αgeneral␈α
propositions␈αabout␈αchess,␈αand␈αthis␈α
accounts
␈↓ α∧␈↓for␈α
many␈α
of␈αtheir␈α
weaknesses.␈α
 Intuitively,␈αthis␈α
is␈α
well␈α
understood␈αby␈α
researchers␈α
in␈αcomputer␈α
game
␈↓ α∧␈↓playing,␈α∂and␈α∂providing␈α∂the␈α∂program␈α∂with␈α∂a␈α∞way␈α∂of␈α∂representing␈α∂general␈α∂facts␈α∂about␈α∂chess␈α∞and
␈↓ α∧␈↓even␈α⊃general␈α⊃facts␈α⊃about␈α⊂particular␈α⊃positions␈α⊃is␈α⊃a␈α⊂major␈α⊃unsolved␈α⊃problem.␈α⊃ For␈α⊃example,␈α⊂no
␈↓ α∧␈↓present␈αprogram␈αcan␈αrepresent␈αthe␈α
assertion␈α␈↓↓"Black␈αhas␈αa␈αbackward␈α
pawn␈αon␈αhis␈αQ3␈αand␈αwhite␈α
may
␈↓ α∧␈↓↓be␈αable␈α
to␈αcramp␈αblack's␈α
position␈αby␈αputting␈α
pressure␈αon␈αit"␈↓.␈α
 Such␈αa␈αrepresentation␈α
would␈αrequire
␈↓ α∧␈↓rules␈α∞that␈α∞permit␈α∞such␈α∞a␈α∞statement␈α∞to␈α∞be␈α∞derived␈α∞in␈α∞appropriate␈α∞positions␈α∞and␈α∞would␈α∂guide␈α∞the
␈↓ α∧␈↓examination of possible moves in accordance with it.

␈↓ α∧␈↓␈↓ αTI␈α∂would␈α∂also␈α∂distinguish␈α∞between␈α∂believing␈α∂the␈α∂laws␈α∂of␈α∞logic␈α∂and␈α∂merely␈α∂using␈α∂them␈α∞(see
␈↓ α∧␈↓Dennett,␈α
p.␈α95).␈α
 The␈α
former␈αrequires␈α
a␈α
language␈αthat␈α
can␈α
express␈αsentences␈α
about␈α
sentences␈αand
␈↓ α∧␈↓which␈α
contains␈α
some␈α
kind␈αof␈α
reflexion␈α
principle.␈α
 Many␈αpresent␈α
problem␈α
solving␈α
programs␈αcan␈α
use
␈↓ α∧␈↓␈↓↓modus␈α∀ponens␈↓␈α∀but␈α∀cannot␈α∀reason␈α∀about␈α∀their␈α∀own␈α∪ability␈α∀to␈α∀use␈α∀new␈α∀facts␈α∀in␈α∀a␈α∀way␈α∪that
␈↓ α∧␈↓corresponds to believing ␈↓↓modus ponens␈↓.


␈↓ α∧␈↓αNOTES

␈↓ α∧␈↓1.␈α(McCarthy␈αand␈αHayes␈α1969)␈α
defines␈αan␈α␈↓↓epistemologically␈αadequate␈↓␈αrepresentation␈αof␈α
information
␈↓ α∧␈↓as␈αone␈αthat␈αcan␈αexpress␈αthe␈αinformation␈αactually␈αavailable␈αto␈αa␈αsubject␈αunder␈αgiven␈αcircumstances.
␈↓ α∧␈↓Thus␈αwhen␈αwe␈αsee␈αa␈αperson,␈α
parts␈αof␈αhim␈αare␈αoccluded,␈αand␈α
we␈αuse␈αour␈αmemory␈αof␈αprevious␈α
looks
␈↓ α∧␈↓at␈α
him␈αand␈α
our␈αgeneral␈α
knowledge␈α
of␈αhumans␈α
to␈αfinish␈α
of␈α
a␈α"picture"␈α
of␈αhim␈α
that␈α
includes␈αboth
␈↓ α∧␈↓two␈α_and␈α_three␈α→dimensional␈α_information.␈α_ We␈α→must␈α_also␈α_consider␈α→␈↓↓metaphysically␈α_adequate␈↓
␈↓ α∧␈↓representations␈αthat␈αcan␈αrepresent␈αcomplete␈αfacts␈αignoring␈αthe␈αsubject's␈αability␈αto␈αacquire␈αthe␈αfacts
␈↓ α∧␈↓in␈αgiven␈αcircumstances.␈α Thus␈αLaplace␈αthought␈αthat␈αthe␈αpositions␈αand␈αvelocities␈αof␈αthe␈αparticles␈αin
␈↓ α∧␈↓the␈α∨universe␈α∨gave␈α∨a␈α∨metaphysically␈α∨adequate␈α∨representation.␈α∨ Metaphysically␈α≡adequate
␈↓ α∧␈↓representations␈α∞are␈α∞needed␈α∞for␈α∞scientific␈α∞and␈α∞other␈α∞theories,␈α∞but␈α∞artificial␈α∞intelligence␈α∞and␈α∞a␈α∞full
␈↓ α∧␈↓philosophical␈α∀treatment␈α∀of␈α∀common␈α∀sense␈α∀experience␈α∀also␈α∀require␈α∀epistemologically␈α∀adequate
␈↓ α∧␈↓representations.␈α This␈αpaper␈αmight␈αbe␈αsummarized␈αas␈αcontending␈αthat␈αmental␈αconcepts␈αare␈αneeded
␈↓ α∧␈↓for␈α⊗an␈α∃epistemologically␈α⊗adequate␈α⊗representation␈α∃of␈α⊗facts␈α∃about␈α⊗machines,␈α⊗especially␈α∃future
␈↓ α∧␈↓intelligent machines.

␈↓ α∧␈↓2.␈α
Work␈α
in␈α∞artificial␈α
intelligence␈α
is␈α∞still␈α
far␈α
from␈α
showing␈α∞how␈α
to␈α
reach␈α∞human-level␈α
intellectual
␈↓ α∧␈↓performance.␈α∞Our␈α∞approach␈α∞to␈α∞the␈α∞AI␈α∞problem␈α∞involves␈α∞identifying␈α∞the␈α∞intellectual␈α
mechanisms
␈↓ α∧␈↓required␈α
for␈α
problem␈α
solving␈α
and␈α
describing␈α
them␈αprecisely.␈α
Therefore␈α
we␈α
are␈α
at␈α
the␈α
end␈α
of␈αthe
␈↓ α∧␈↓philosophical␈α⊃spectrum␈α∩that␈α⊃requires␈α⊃everything␈α∩to␈α⊃be␈α⊃formalized␈α∩in␈α⊃mathematical␈α⊃logic.␈α∩It␈α⊃is
␈↓ α∧␈↓sometimes␈α∂said␈α∂that␈α∂one␈α∂studies␈α∂philosophy␈α⊂in␈α∂order␈α∂to␈α∂advance␈α∂beyond␈α∂one's␈α⊂untutored␈α∂naive
␈↓ α∧␈↓world-view,␈α∩but␈α∩unfortunately␈α∩for␈α∪artificial␈α∩intelligence,␈α∩no-one␈α∩has␈α∪yet␈α∩been␈α∩able␈α∩to␈α∪give␈α∩a
␈↓ α∧␈↓description␈α⊃of␈α⊃even␈α⊂a␈α⊃naive␈α⊃world-view,␈α⊂complete␈α⊃and␈α⊃precise␈α⊂enough␈α⊃to␈α⊃allow␈α⊃a␈α⊂knowledge-
␈↓ α∧␈↓seeking program to be constructed in accordance with its tenets.

␈↓ α∧␈↓3.␈α
Present␈α
AI␈α
programs␈α
operate␈α
in␈α
limited␈α
domains,␈α
e.g.␈α
play␈α
particular␈α
games,␈α
prove␈α
theorems␈αin␈α
a
␈↓ α∧␈↓particular␈α
logical␈α∞system,␈α
or␈α
understand␈α∞natural␈α
language␈α
sentences␈α∞covering␈α
a␈α∞particular␈α
subject
␈↓ α∧␈↓matter␈α∞and␈α∞with␈α∞other␈α∞semantic␈α∞restrictions.␈α∞ General␈α∞intelligence␈α∞will␈α∞require␈α∞general␈α∂models␈α∞of
␈↓ α∧␈↓situations␈α
changing␈α∞in␈α
time,␈α∞actors␈α
with␈α∞goals␈α
and␈α∞strategies␈α
for␈α∞achieving␈α
them,␈α∞and␈α
knowledge
␈↓ α∧␈↓about how information can be obtained.

␈↓ α∧␈↓4.␈αOur␈αopinion␈α
is␈αthat␈αhuman␈α
intellectual␈αstructure␈αis␈α
substantially␈αdetermined␈αby␈α
the␈αintellectual
␈↓ α∧␈↓problems␈αhumans␈αface.␈α
 Thus␈αa␈αMartian␈αor␈α
a␈αmachine␈αwill␈αneed␈α
similar␈αstructures␈αto␈αsolve␈α
similar
␈↓ α∧␈↓problems.␈α∂Dennett␈α∂(1971)␈α∂expresses␈α∂similar␈α∂views.␈α∂On␈α∂the␈α∂other␈α∂hand,␈α∂the␈α∂human␈α∞motivational
␈↓ α∧␈↓structure␈α
seems␈α
to␈α
have␈α
many␈α
accidental␈α
features␈α
that␈α
might␈α
not␈α
be␈α
found␈α
in␈α
Martians␈α
and␈α
that␈α
we
␈↓ α∧␈↓would␈αnot␈αbe␈α
inclined␈αto␈αprogram␈αinto␈α
machines.␈α This␈αis␈αnot␈α
the␈αplace␈αto␈αpresent␈α
arguments␈αfor
␈↓ α∧␈↓this viewpoint.

␈↓ α∧␈↓5.␈αAfter␈αseveral␈α
versions␈αof␈αthis␈α
paper␈αwere␈αcompleted,␈α
I␈αcame␈αacross␈α
(Boden␈α1972)␈αwhich␈α
contains
(among other things) an account of the psychology of William McDougall (1871-1938) and a
␈↓ α∧␈↓discussion␈α⊂of␈α⊂a␈α⊂hypothetical␈α⊃program␈α⊂simulating␈α⊂it.␈α⊂ In␈α⊂my␈α⊃opinion,␈α⊂a␈α⊂psychology␈α⊂like␈α⊃that␈α⊂of
␈↓ α∧␈↓McDougall␈α⊂is␈α⊂a␈α⊂better␈α⊂candidate␈α⊂for␈α∂simulation␈α⊂than␈α⊂many␈α⊂more␈α⊂recent␈α⊂psychological␈α∂theories,
␈↓ α∧␈↓because␈αit␈αcomes␈αcloser␈αto␈αpresenting␈αa␈αtheory␈αof␈αthe␈αorganism␈αas␈αa␈αwhole,␈αproposing␈αmechanisms
␈↓ α∧␈↓for␈αthoughts,␈αgoals,␈αand␈αemotions.␈αI␈αagree␈αwith␈αthe␈αways␈αin␈αwhich␈αBoden␈α
modernizes␈αMcDougall,
␈↓ α∧␈↓but␈α∂even␈α∞with␈α∂her␈α∂improvements,␈α∞I␈α∂think␈α∂the␈α∞theory␈α∂is␈α∞a␈α∂long␈α∂way␈α∞from␈α∂being␈α∂simulatable,␈α∞let
␈↓ α∧␈↓alone␈αcorrect.␈α One␈αmajor␈αproblem␈αis␈αthat␈αcompound␈α␈↓↓sentiments␈↓,␈αto␈αuse␈αMcDougall's␈αterm,␈αsuch␈αas
␈↓ α∧␈↓reverence␈α↔are␈α↔diagrammed␈α↔in␈α↔Boden's␈α⊗book␈α↔as␈α↔essentially␈α↔Boolean␈α↔combinations␈α↔of␈α⊗their
␈↓ α∧␈↓component␈α∀emotions.␈α∪ In␈α∀reality␈α∀they␈α∪must␈α∀at␈α∪least␈α∀be␈α∀complex␈α∪patterns␈α∀formed␈α∀from␈α∪their
components and other entities.  Thus we must have sentences as complex as reveres(person1, concept1, situation) ≡ ∃z. isbelief(z) ∧ ascribes(person1,z,concept1,situation1) ∧ etc.  If I'm right
␈↓ α∧␈↓about␈α∞this,␈α∞then␈α∞every␈α∞formulation␈α∞of␈α∞McDougall␈α∞will␈α∞have␈α∞to␈α∞be␈α∞taken␈α∞as␈α∞merely␈α∞suggestive␈α
of
␈↓ α∧␈↓what␈αterms␈αshould␈α
be␈αin␈αthe␈α
definitions␈αand␈αnot␈α
as␈αactually␈αgiving␈α
them.␈α Nevertheless,␈αit␈αseems␈α
to
␈↓ α∧␈↓me␈α⊗that␈α↔much␈α⊗can␈α⊗be␈α↔learned␈α⊗from␈α↔contemplating␈α⊗the␈α⊗simulation␈α↔of␈α⊗a␈α↔McDougall␈α⊗man.
␈↓ α∧␈↓Axiomatizing the McDougall man should come before simulating it, however.

␈↓ α∧␈↓6.␈αBehavioral␈αdefinitions␈αare␈αoften␈αfavored␈αin␈αphilosophy.␈α A␈αsystem␈αis␈αdefined␈αto␈αhave␈αa␈αcertain
␈↓ α∧␈↓quality␈α
if␈α
it␈α
behaves␈α
in␈α
a␈αcertain␈α
way␈α
or␈α
is␈α
␈↓↓disposed␈↓␈α
to␈αbehave␈α
in␈α
a␈α
certain␈α
way.␈α
Their␈α
virtue␈αis
␈↓ α∧␈↓conservatism;␈αthey␈αdon't␈αpostulate␈αinternal␈αstates␈α
that␈αare␈αunobservable␈αto␈αpresent␈αscience␈αand␈α
may
␈↓ α∧␈↓remain␈α∂unobservable.␈α⊂However,␈α∂such␈α⊂definitions␈α∂are␈α⊂awkward␈α∂for␈α⊂mental␈α∂qualities,␈α⊂because,␈α∂as
␈↓ α∧␈↓common␈α⊃sense␈α⊂suggests,␈α⊃a␈α⊂mental␈α⊃quality␈α⊂may␈α⊃not␈α⊂result␈α⊃in␈α⊂behavior,␈α⊃because␈α⊃another␈α⊂mental
␈↓ α∧␈↓quality␈α∞may␈α∂prevent␈α∞it;␈α∞e.g.␈α∂ I␈α∞may␈α∞think␈α∂you␈α∞are␈α∞thick-headed,␈α∂but␈α∞politeness␈α∞may␈α∂prevent␈α∞my
␈↓ α∧␈↓saying␈αso.␈α
Particular␈αdifficulties␈α
can␈αbe␈α
overcome,␈αbut␈α
an␈αimpression␈α
of␈αvagueness␈α
remains.␈α The
␈↓ α∧␈↓liking␈αfor␈αbehavioral␈αdefinitions␈αstems␈αfrom␈αcaution,␈αbut␈αI␈αwould␈αinterpret␈αscientific␈αexperience␈αas
␈↓ α∧␈↓showing␈α∞that␈α∞boldness␈α∞in␈α∞postulating␈α∞complex␈α∞structures␈α∞of␈α∞unobserved␈α∞entities␈α∞-␈α∞provided␈α∂it␈α∞is
␈↓ α∧␈↓accompanied␈α∃by␈α∃a␈α∃willingness␈α∃to␈α∃take␈α∃back␈α∀mistakes␈α∃-␈α∃is␈α∃more␈α∃likely␈α∃to␈α∃be␈α∃rewarded␈α∀by
␈↓ α∧␈↓understanding␈α
of␈αand␈α
control␈αover␈α
nature␈αthan␈α
is␈αpositivistic␈α
timidity.␈α It␈α
is␈αparticularly␈α
instructive
␈↓ α∧␈↓to␈αimagine␈αa␈αdetermined␈α
behaviorist␈αtrying␈αto␈αfigure␈αout␈α
an␈αelectronic␈αcomputer.␈α Trying␈αto␈α
define
␈↓ α∧␈↓each␈α⊃quality␈α⊃behaviorally␈α⊃would␈α⊃get␈α⊃him␈α⊃nowhere;␈α⊃only␈α⊃simultaneously␈α⊃postulating␈α⊃a␈α⊂complex
␈↓ α∧␈↓structure␈α∪including␈α∪memory,␈α∪arithmetic␈α∪unit,␈α∪control␈α∪structure,␈α∪and␈α∪input-output␈α∪would␈α∪yield
␈↓ α∧␈↓predictions␈α⊃that␈α⊃could␈α⊃be␈α⊃compared␈α⊃with␈α⊂experiment.␈α⊃ There␈α⊃is␈α⊃a␈α⊃sense␈α⊃in␈α⊃which␈α⊂operational
␈↓ α∧␈↓definitions␈α
are␈α
not␈αtaken␈α
seriously␈α
even␈αby␈α
their␈α
proposers.␈α Suppose␈α
someone␈α
gives␈αan␈α
operational
␈↓ α∧␈↓definition␈αof␈αlength␈α
(e.g.␈αinvolving␈αa␈αcertain␈α
platinum␈αbar),␈αand␈αa␈α
whole␈αschool␈αof␈α
physicists␈αand
␈↓ α∧␈↓philosophers␈αbecomes␈αquite␈αattached␈αto␈αit.␈α A␈αfew␈αyears␈αlater,␈αsomeone␈αelse␈αcriticizes␈αthe␈αdefinition
␈↓ α∧␈↓as␈α⊃lacking␈α∩some␈α⊃desirable␈α⊃property,␈α∩proposes␈α⊃a␈α⊃change,␈α∩and␈α⊃the␈α⊃change␈α∩is␈α⊃accepted.␈α∩ This␈α⊃is
␈↓ α∧␈↓normal,␈αbut␈αif␈αthe␈αoriginal␈αdefinition␈αexpressed␈αwhat␈αthey␈αreally␈αmeant␈αby␈αthe␈αlength,␈αthey␈αwould
␈↓ α∧␈↓refuse␈αto␈αchange,␈αarguing␈αthat␈αthe␈αnew␈αconcept␈αmay␈αhave␈αits␈αuses,␈αbut␈αit␈αisn't␈αwhat␈αthey␈αmean␈αby
␈↓ α∧␈↓"length".␈α This␈αshows␈αthat␈αthe␈αconcept␈αof␈α"length"␈α
as␈αa␈αproperty␈αof␈αobjects␈αis␈αmore␈αstable␈αthan␈α
any
␈↓ α∧␈↓operational␈αdefinition.␈α Carnap␈αhas␈αan␈αinteresting␈αsection␈αin␈α␈↓↓Meaning␈αand␈αNecessity␈↓␈αentitled␈α
"The
␈↓ α∧␈↓Concept␈αof␈αIntension␈αfor␈αa␈αRobot"␈αin␈αwhich␈αhe␈αmakes␈αa␈αsimilar␈αpoint␈αsaying,␈α␈↓↓"It␈αis␈αclear␈αthat␈αthe
␈↓ α∧␈↓↓method␈α∩of␈α∩structural␈α∩analysis,␈α⊃if␈α∩applicable,␈α∩is␈α∩more␈α⊃powerful␈α∩than␈α∩the␈α∩behavioristic␈α⊃method,
␈↓ α∧␈↓↓because␈α⊂it␈α⊂can␈α⊂supply␈α⊃a␈α⊂general␈α⊂answer,␈α⊂and,␈α⊃under␈α⊂favorable␈α⊂circumstances,␈α⊂even␈α⊃a␈α⊂complete
␈↓ α∧␈↓↓answer␈αto␈αthe␈αquestion␈αof␈αthe␈αintension␈αof␈α
a␈αgiven␈αpredicate."␈↓␈αThe␈αclincher␈αfor␈αAI,␈αhowever,␈α
is␈αan
␈↓ α∧␈↓"argument␈α
from␈αdesign".␈α
In␈αorder␈α
to␈α
produce␈αdesired␈α
behavior␈αin␈α
a␈αcomputer␈α
program,␈α
we␈αbuild
␈↓ α∧␈↓certain␈αmental␈αqualities␈αinto␈αits␈αstructure.␈α This␈αdoesn't␈αlead␈αto␈αbehavioral␈αcharacterizations␈αof␈αthe
qualities, because the particular qualities are only one of many ways we might use to get the
␈↓ α∧␈↓desired behavior, and anyway the desired behavior is not always realized.

␈↓ α∧␈↓7.␈α∀Putnam␈α∀(1970)␈α∀also␈α∪proposes␈α∀what␈α∀amount␈α∀to␈α∪second␈α∀order␈α∀definitions␈α∀for␈α∪psychological
␈↓ α∧␈↓properties.

␈↓ α∧␈↓8.␈αWhether␈α
a␈αsystem␈αhas␈α
beliefs␈αand␈αother␈α
mental␈αqualities␈α
is␈αnot␈αprimarily␈α
a␈αmatter␈αof␈α
complexity
␈↓ α∧␈↓of␈αthe␈α
system.␈α Although␈α
cars␈αare␈α
more␈αcomplex␈αthan␈α
thermostats,␈αit␈α
is␈αhard␈α
to␈αascribe␈α
beliefs␈αor
␈↓ α∧␈↓goals␈αto␈αthem,␈αand␈αthe␈αsame␈αis␈αperhaps␈αtrue␈αof␈αthe␈αbasic␈αhardware␈αof␈αa␈αcomputer,␈αi.e.␈αthe␈αpart␈αof
␈↓ α∧␈↓the computer that executes the program without the program itself.

␈↓ α∧␈↓9.␈α
 Our␈αown␈α
ability␈αto␈α
derive␈αthe␈α
laws␈αof␈α
higher␈αlevels␈α
of␈αorganization␈α
from␈αknowledge␈α
of␈αlower
␈↓ α∧␈↓level␈αlaws␈α
is␈αalso␈αlimited␈α
by␈αuniversality.␈αWhile␈α
the␈αpresently␈αaccepted␈α
laws␈αof␈αphysics␈α
allow␈αonly
␈↓ α∧␈↓one␈αchemistry,␈αthe␈αlaws␈αof␈αphysics␈αand␈αchemistry␈αallow␈αmany␈αbiologies,␈αand,␈αbecause␈αthe␈αneuron␈α
is


␈↓ α∧␈↓a␈αuniversal␈αcomputing␈αelement,␈αan␈αarbitrary␈αmental␈αstructure␈αis␈αallowed␈αby␈αbasic␈αneurophysiology.
␈↓ α∧␈↓Therefore,␈α∂to␈α⊂determine␈α∂human␈α⊂mental␈α∂structure,␈α⊂one␈α∂must␈α⊂make␈α∂psychological␈α⊂experiments,␈α∂␈↓↓or␈↓
determine the actual anatomical structure of the brain and the information stored in it.  One
␈↓ α∧␈↓cannot␈α∂determine␈α∂the␈α∂structure␈α∂of␈α⊂the␈α∂brain␈α∂merely␈α∂from␈α∂the␈α⊂fact␈α∂that␈α∂the␈α∂brain␈α∂is␈α⊂capable␈α∂of
␈↓ α∧␈↓certain␈α
problem␈α∞solving␈α
performance.␈α∞ In␈α
this␈α∞respect,␈α
our␈α∞position␈α
is␈α∞similar␈α
to␈α∞that␈α
of␈α∞the␈α
Life
␈↓ α∧␈↓robot.

␈↓ α∧␈↓␈↓ αT10.␈α⊂Philosophy␈α⊂and␈α⊂artificial␈α∂intelligence.␈α⊂ These␈α⊂fields␈α⊂overlap␈α∂in␈α⊂the␈α⊂following␈α⊂way:␈α∂In
␈↓ α∧␈↓order␈αto␈αmake␈αa␈αcomputer␈αprogram␈αbehave␈αintelligently,␈αits␈αdesigner␈αmust␈αbuild␈αinto␈αit␈αa␈α
view␈αof
␈↓ α∧␈↓the␈α
world␈α
in␈α
general,␈α∞apart␈α
from␈α
what␈α
they␈α
include␈α∞about␈α
particular␈α
sciences.␈α
 (The␈α∞skeptic␈α
who
␈↓ α∧␈↓doubts␈αwhether␈αthere␈α
is␈αanything␈αto␈αsay␈α
about␈αthe␈αworld␈αapart␈α
from␈αthe␈αparticular␈αsciences␈α
should
␈↓ α∧␈↓try␈αto␈αwrite␈α
a␈αcomputer␈αprogram␈α
that␈αcan␈αfigure␈αout␈α
how␈αto␈αget␈α
to␈αTimbuktoo,␈αtaking␈αinto␈α
account
␈↓ α∧␈↓not␈α
only␈α
the␈α
facts␈α
about␈αtravel␈α
in␈α
general␈α
but␈α
also␈αfacts␈α
about␈α
what␈α
people␈α
and␈α
documents␈αhave
␈↓ α∧␈↓what␈αinformation,␈αand␈αwhat␈αinformation␈αwill␈αbe␈αrequired␈αat␈αdifferent␈αstages␈αof␈αthe␈αtrip␈αand␈αwhen
␈↓ α∧␈↓and␈α
how␈α
it␈α
is␈α
to␈α
be␈α
obtained.␈α
 He␈α
will␈α
rapidly␈α
discover␈α
that␈α
he␈α
is␈α
lacking␈α
a␈α
␈↓↓science␈α
of␈αcommon␈α
sense␈↓,
␈↓ α∧␈↓i.e.␈α∞he␈α∞will␈α∞be␈α∞unable␈α∞to␈α∞formally␈α∞express␈α∞and␈α∞build␈α∞into␈α∞his␈α∞program␈α∞"what␈α∞everybody␈α
knows".
␈↓ α∧␈↓Maybe␈αphilosophy␈αcould␈αbe␈αdefined␈αas␈αan␈αattempted␈α␈↓↓science␈αof␈αcommon␈αsense␈↓,␈αor␈αelse␈αthe␈α␈↓↓science␈α
of
␈↓ α∧␈↓↓common sense␈↓ should be a definite part of philosophy.)

Artificial intelligence has another component, which philosophers have not studied, namely heuristics.  Heuristics is concerned with the question: given the facts and a goal, how should the program
␈↓ α∧␈↓investigate␈α
the␈α
possibilities␈αand␈α
decide␈α
what␈α
to␈αdo.␈α
 On␈α
the␈α
other␈αhand,␈α
artificial␈α
intelligence␈αis␈α
not
␈↓ α∧␈↓much concerned with aesthetics and ethics.

␈↓ α∧␈↓␈↓ αTNot␈α∪all␈α∀approaches␈α∪to␈α∪philosophy␈α∀lead␈α∪to␈α∪results␈α∀relevant␈α∪to␈α∪the␈α∀artificial␈α∪intelligence
␈↓ α∧␈↓problem.␈α∞ On␈α∂the␈α∞face␈α∞of␈α∂it,␈α∞a␈α∞philosophy␈α∂that␈α∞entailed␈α∞the␈α∂view␈α∞that␈α∞artificial␈α∂intelligence␈α∞was
␈↓ α∧␈↓impossible␈α∂would␈α⊂be␈α∂unhelpful,␈α⊂but␈α∂besides␈α⊂that,␈α∂taking␈α⊂artificial␈α∂intelligence␈α⊂seriously␈α∂suggests
␈↓ α∧␈↓some␈α
philosophical␈αpoints␈α
of␈αview.␈α
 I␈αam␈α
not␈αsure␈α
that␈α
all␈αI␈α
shall␈αlist␈α
are␈αrequired␈α
for␈αpursuing␈α
the
␈↓ α∧␈↓AI goal - some of them may be just my prejudices - but here they are:

␈↓ α∧␈↓␈↓ β$10.1.␈αThe␈αrelation␈αbetween␈α
a␈αworld␈αview␈αand␈αthe␈α
world␈αshould␈αbe␈αstudied␈αby␈α
methods
␈↓ α∧␈↓akin␈αto␈αmetamathematics␈αin␈αwhich␈αsystems␈α
are␈αstudied␈αfrom␈αthe␈αoutside.␈α In␈α
metamathematics␈αwe
␈↓ α∧␈↓study␈α∀the␈α∀relation␈α∀between␈α∪a␈α∀mathematical␈α∀system␈α∀and␈α∪its␈α∀models.␈α∀ Philosophy␈α∀(or␈α∪perhaps
␈↓ α∧␈↓␈↓↓metaphilosophy␈↓)␈αshould␈αstudy␈αthe␈αrelation␈α
between␈αworld␈αstructures␈αand␈αsystems␈αwithin␈α
them␈αthat
␈↓ α∧␈↓seek␈αknowledge.␈α Just␈αas␈αthe␈αmetamathematician␈α
can␈αuse␈αany␈αmathematical␈αmethods␈αin␈α
this␈αstudy
␈↓ α∧␈↓and␈αdistinguishes␈αthe␈α
methods␈αhe␈αuses␈α
form␈αthose␈αbeing␈α
studied,␈αso␈αthe␈α
philosopher␈αshould␈αuse␈α
all
␈↓ α∧␈↓his scientific knowledge in studying philosophical systems from the outside.

Thus the question "How do I know?" is best answered by studying "How does it know?",
␈↓ α∧␈↓getting␈αthe␈α
best␈αanswer␈αthat␈α
the␈αcurrent␈αstate␈α
of␈αscience␈αand␈α
philosophy␈αpermits,␈αand␈α
then␈αseeing
␈↓ α∧␈↓how this answer stands up to doubts about one's own sources of knowledge.

␈↓ α∧␈↓␈↓ β$10.2.␈α∞We␈α∞regard␈α
␈↓↓metaphysics␈↓␈α∞as␈α∞the␈α∞study␈α
of␈α∞the␈α∞general␈α
structure␈α∞of␈α∞the␈α∞world␈α
and
␈↓ α∧␈↓␈↓↓epistemology␈↓␈α
as␈α
studying␈α
what␈α
knowledge␈α
of␈α
the␈αworld␈α
can␈α
be␈α
had␈α
by␈α
an␈α
intelligence␈α
with␈αgiven
␈↓ α∧␈↓opportunities␈αto␈αobserve␈αand␈αexperiment.␈α We␈αneed␈αto␈αdistinguish␈αwhat␈αcan␈αbe␈αdetermined␈αabout
␈↓ α∧␈↓the␈α∃structure␈α∃of␈α⊗humans␈α∃and␈α∃machines␈α⊗by␈α∃scientific␈α∃research␈α∃over␈α⊗a␈α∃period␈α∃of␈α⊗time␈α∃and
␈↓ α∧␈↓experimenting␈αwith␈αmany␈αindividuals␈αfrom␈αwhat␈αcan␈αbe␈αlearned␈αby␈αin␈αa␈αparticular␈αsituation␈αwith
␈↓ α∧␈↓particular␈αopportunities␈α
to␈αobserve.␈α From␈α
the␈αAI␈α
point␈αof␈αview,␈α
the␈αlatter␈αis␈α
as␈αimportant␈α
as␈αthe
␈↓ α∧␈↓former,␈α⊃and␈α⊃we␈α∩suppose␈α⊃that␈α⊃philosophers␈α∩would␈α⊃also␈α⊃consider␈α∩it␈α⊃part␈α⊃of␈α∩epistemology.␈α⊃ The


␈↓ α∧␈↓possibilities␈α⊃of␈α⊂reductionism␈α⊃are␈α⊂also␈α⊃different␈α⊂for␈α⊃theoretical␈α⊂and␈α⊃everyday␈α⊃epistemology.␈α⊂ We
␈↓ α∧␈↓could␈α∂imagine␈α∂that␈α∂the␈α∂rules␈α∂of␈α∂everyday␈α∂epistemology␈α∂could␈α∂be␈α∂deduced␈α∂from␈α∂a␈α∂knowledge␈α∂of
␈↓ α∧␈↓physics␈αand␈αthe␈αstructure␈αof␈αthe␈αbeing␈αand␈αthe␈αworld,␈αbut␈αwe␈αcan't␈αsee␈αhow␈αone␈αcould␈αavoid␈αusing
␈↓ α∧␈↓mental concepts in expressing knowledge actually obtained by the senses.

␈↓ α∧␈↓␈↓ β$10.3.␈α It␈αis␈α
now␈αaccepted␈αthat␈α
the␈αbasic␈αconcepts␈αof␈α
physical␈αtheories␈αare␈α
far␈αremoved
␈↓ α∧␈↓from␈α∩observation.␈α∩ The␈α∩human␈α∩sense␈α∩organs␈α∩are␈α∩many␈α∩levels␈α∩of␈α∩organization␈α∪removed␈α∩from
␈↓ α∧␈↓quantum␈α∪mechanical␈α∪states,␈α∪and␈α∀we␈α∪have␈α∪learned␈α∪to␈α∀accept␈α∪the␈α∪complication␈α∪this␈α∀causes␈α∪in
␈↓ α∧␈↓verifying␈α
physical␈α∞theories.␈α
Experience␈α
in␈α∞trying␈α
to␈α
make␈α∞intelligent␈α
computer␈α∞programs␈α
suggests
␈↓ α∧␈↓that␈α⊂the␈α⊃basic␈α⊂concepts␈α⊂of␈α⊃the␈α⊂common␈α⊂sense␈α⊃world␈α⊂are␈α⊂also␈α⊃complex␈α⊂and␈α⊂not␈α⊃always␈α⊂directly
␈↓ α∧␈↓accessible␈α∂to␈α∂observation.␈α∂ In␈α∞particular,␈α∂the␈α∂common␈α∂sense␈α∂world␈α∞is␈α∂not␈α∂a␈α∂construct␈α∂from␈α∞sense
␈↓ α∧␈↓data,␈αbut␈αsense␈αdata␈αplay␈αan␈αimportant␈αrole.␈α When␈αa␈αman␈αor␈αa␈αcomputer␈αprogram␈αsees␈αa␈αdog,␈αwe
␈↓ α∧␈↓will␈α⊃need␈α⊃both␈α⊃the␈α∩relation␈α⊃between␈α⊃the␈α⊃observer␈α⊃and␈α∩the␈α⊃dog␈α⊃and␈α⊃the␈α⊃relation␈α∩between␈α⊃the
␈↓ α∧␈↓observer and the brown patch in order to construct a good theory of the event.

␈↓ α∧␈↓␈↓ β$10.4.␈α In␈αspirit␈αthis␈αpaper␈αis␈αmaterialist,␈αbut␈αit␈αis␈αlogically␈αcompatible␈αwith␈αsome␈αother
␈↓ α∧␈↓philosophies.␈α∂ Thus␈α⊂cellular␈α∂automaton␈α∂models␈α⊂of␈α∂the␈α∂physical␈α⊂world␈α∂may␈α∂be␈α⊂supplemented␈α∂by
␈↓ α∧␈↓supposing␈αthat␈αcertain␈αcomplex␈αconfigurations␈α
interact␈αwith␈αadditional␈αautomata␈αcalled␈α
souls␈αthat
␈↓ α∧␈↓also␈α∪interact␈α∩with␈α∪each␈α∩other.␈α∪ Such␈α∩␈↓↓interactionist␈α∪dualism␈↓␈α∩won't␈α∪meet␈α∩emotional␈α∪or␈α∩spiritual
␈↓ α∧␈↓objections␈αto␈αmaterialism,␈αbut␈αit␈αdoes␈αprovide␈αa␈αlogical␈αniche␈αfor␈αany␈αempirically␈αargued␈αbelief␈αin
␈↓ α∧␈↓telepathy,␈α⊃communication␈α⊂with␈α⊃the␈α⊂dead,␈α⊃and␈α⊃such␈α⊂other␈α⊃psychic␈α⊂phenomena␈α⊃as␈α⊃don't␈α⊂require
␈↓ α∧␈↓tampering␈α
with␈α
causality.␈α
 (As␈α
does␈α
precognition,␈α
for␈α
example).␈α
 A␈α
person␈α
who␈α
believed␈αthe␈α
alleged
␈↓ α∧␈↓evidence␈αfor␈αsuch␈αphenomena␈α
and␈αstill␈αwanted␈αscientific␈α
explanations␈αcould␈αmodel␈αhis␈αbeliefs␈α
with
␈↓ α∧␈↓auxiliary automata.


␈↓ α∧␈↓αREFERENCES

␈↓ α∧␈↓␈↓αArmstrong,␈αD.M.␈↓␈α
(1968),␈α␈↓↓A␈αMaterialist␈α
Theory␈αof␈α
the␈αMind␈↓,␈αRoutledge␈α
and␈αKegan␈α
Paul,␈αLondon
␈↓ α∧␈↓and New York.

␈↓ α∧␈↓␈↓αBoden, Margaret A.␈↓ (1972), ␈↓↓Purposive Explanation in Psychology␈↓, Harvard University Press.

␈↓ α∧␈↓␈↓αCarnap, Rudolf␈↓ (1956), ␈↓↓Meaning and Necessity␈↓, University of Chicago Press.

␈↓ α∧␈↓␈↓αDavidson,␈α
Donald␈↓␈α
(1973)␈α
The␈α
Material␈α
Mind.␈α␈↓↓Logic,␈α
Methodology␈α
and␈α
Philosophy␈α
of␈α
Science␈αIV␈↓,
␈↓ α∧␈↓P. Suppes, L. Henkin, C. Moisil, and A. Joja (eds.), Amsterdam, North-Holland.

␈↓ α∧␈↓␈↓αDennett, D.C.␈↓ (1971) Intentional Systems.  ␈↓↓Journal of Philosophy␈↓ vol. 68, No. 4, Feb. 25.

␈↓ α∧␈↓␈↓αGosper,␈αR.W.␈↓␈α(1976)␈αPrivate␈αCommunication.␈α (Much␈αinformation␈αabout␈αLife␈αhas␈αbeen␈αprinted␈αin
␈↓ α∧␈↓Martin Gardner's column in ␈↓↓Scientific American␈↓, and there is a magazine called ␈↓↓Lifeline␈↓).

␈↓ α∧␈↓␈↓αLewis, David␈↓ (1973), ␈↓↓Counterfactuals␈↓, Harvard University Press.

␈↓ α∧␈↓␈↓αMcCarthy,␈α⊃John␈↓␈α⊃(1959)␈α⊃Programs␈α∩with␈α⊃Common␈α⊃Sense.␈α⊃␈↓↓Mechanisation␈α⊃of␈α∩Thought␈α⊃Processes,
␈↓ α∧␈↓↓Volume I␈↓.  London:HMSO.

␈↓ α∧␈↓␈↓αMcCarthy,␈α∪J.␈α∩and␈α∪Hayes,␈α∩P.J.␈↓␈α∪(1969)␈α∪Some␈α∩Philosophical␈α∪Problems␈α∩from␈α∪the␈α∪Standpoint␈α∩of
␈↓ α∧␈↓Artificial␈α∩Intelligence.␈α∩␈↓↓Machine␈α∩Intelligence␈α∩4␈↓,␈α∩pp.␈α⊃463-502␈α∩(eds.␈α∩Meltzer,␈α∩B.␈α∩and␈α∩Michie,␈α⊃D.).
␈↓ α∧␈↓Edinburgh: Edinburgh University Press.

␈↓ α∧␈↓␈↓αMcCarthy,␈α↔John␈↓␈α↔(1977a),␈α↔␈↓↓First␈α↔Order␈α⊗Theories␈α↔of␈α↔Individual␈α↔Concepts␈↓,␈α↔Stanford␈α⊗Artificial
␈↓ α∧␈↓Intelligence Laboratory, (to be published).

␈↓ α∧␈↓␈↓αMcCarthy,␈αJohn␈↓␈α(1977b),␈α␈↓↓Circumscription␈α
-␈αA␈αWay␈αof␈α
Jumping␈αto␈αConclusions␈↓,␈αStanford␈α
Artificial
␈↓ α∧␈↓Intelligence Laboratory, (to be published).

␈↓ α∧␈↓␈↓αMontague,␈α⊃Richard␈↓␈α⊂(1963),␈α⊃Syntactical␈α⊂Treatments␈α⊃of␈α⊂Modality,␈α⊃with␈α⊂Corollaries␈α⊃on␈α⊂Reflexion
␈↓ α∧␈↓Principles and Finite Axiomatizability, ␈↓↓Acta Philosophica Fennica␈↓ ␈↓α16␈↓:153-167.

␈↓ α∧␈↓␈↓αMoore,␈α↔E.F.␈↓␈α_(1956),␈α↔Gedanken␈α↔Experiments␈α_with␈α↔Sequential␈α↔Machines.␈α_ ␈↓↓Automata␈α↔Studies␈↓.
␈↓ α∧␈↓Princeton University Press.

␈↓ α∧␈↓␈↓αMoore,␈α∀Robert␈α∀C.␈↓␈α∀(1975),␈α∀␈↓↓Reasoning␈α∀from␈α∀Incomplete␈α∀Knowledge␈α∀in␈α∀a␈α∀Procedural␈α∪Deduction
␈↓ α∧␈↓↓System, M.S. Thesis, M.I.T.

␈↓ α∧␈↓↓␈↓αPutnam,␈αHilary␈↓␈α
(1961)␈αMinds␈αand␈α
Machines,␈αin␈α
␈↓↓Dimensions␈αof␈αMind␈↓,␈α
Sidney␈αHook␈α
(ed.),␈αCollier
␈↓ α∧␈↓Books, New York.

␈↓ α∧␈↓␈↓αPutnam,␈α∩Hilary␈↓␈α∪(1970),␈α∩On␈α∪Properties,␈α∩in␈α∪␈↓↓Essays␈α∩in␈α∩Honor␈α∪of␈α∩Carl␈α∪G.␈α∩ Hempel␈↓,␈α∪D.␈α∩Reidel
␈↓ α∧␈↓Publishing Co., Dordrecht, Holland.

␈↓ α∧␈↓␈↓αRyle, Gilbert␈↓ (1949), ␈↓↓The Concept of Mind␈↓, Hutchinson and Company, London.